Abstract

Numerous contour-based corner detection (CBCD) algorithms have been proposed recently, necessitating effective and practical evaluation. Most existing methods evaluate corner detection accuracy through metrics computed between a test image and its attacked versions, or rely on image-specific ground truth. Because these methods take images as input, they do not evaluate corner detection in isolation but conflate it with contour extraction evaluation. Since contour extraction is a separate research topic to which existing CBCD algorithms contribute little, this entanglement can distort evaluation results and hinder the development of corner detection. Furthermore, most evaluation methods report only simple statistics of the evaluation metrics, such as the mean, which are inadequate to reflect the overall performance distribution. This study presents a novel benchmark specifically designed for assessing CBCD methods, with two major contributions. First, we construct two dedicated datasets, one with ground-truth corners and one without. Dedicated contours, rather than images, are used as input for evaluating CBCD methods, eliminating the influence of extracted contour quality. When ground-truth corners are unavailable, we apply additional contour attacks, including Gaussian noise, projective transformations, and combined geometric transformations, to simulate complex real-world image processes beyond the attacks used in existing evaluation methods. Second, we evaluate twelve CBCD methods with six distinct metrics on the constructed contour datasets. To gain deeper insight into the overall performance distribution, the sign test for hypothesis testing is used alongside simple statistical measures in analyzing the evaluation metrics. Experimental results demonstrate that no single method performs best across all six evaluation metrics, while different CBCD algorithms excel in different scenarios. The evaluation code will be publicly available at https://github.com/roylin1229/CBCD_eva.
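To make the notion of contour attacks concrete, the sketch below applies Gaussian noise and a projective (homography) transformation to a set of 2D contour points. This is a minimal illustration only; the abstract does not specify the paper's actual implementation or parameters, so the function names, noise level, and homography values here are assumptions.

```python
import numpy as np

def add_gaussian_noise(contour, sigma=1.0, rng=None):
    """Perturb contour points (N x 2 array) with i.i.d. Gaussian noise (assumed attack)."""
    rng = np.random.default_rng() if rng is None else rng
    return contour + rng.normal(0.0, sigma, size=contour.shape)

def apply_projective(contour, H):
    """Apply a 3x3 projective (homography) matrix H to contour points."""
    pts = np.hstack([contour, np.ones((contour.shape[0], 1))])  # homogeneous coordinates
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]  # back to Cartesian coordinates

if __name__ == "__main__":
    # A toy "contour": points sampled on a circle.
    t = np.linspace(0.0, 2.0 * np.pi, 100)
    contour = np.stack([100 + 50 * np.cos(t), 100 + 50 * np.sin(t)], axis=1)

    # Mild projective distortion; these values are illustrative, not from the paper.
    H = np.array([[1.0,  0.05,  2.0],
                  [0.02, 1.0,  -3.0],
                  [1e-4, 2e-4,  1.0]])

    attacked = add_gaussian_noise(apply_projective(contour, H), sigma=1.5)
    print(attacked.shape)  # (100, 2): the attacked contour fed to a CBCD method
```

A combined geometric attack, as mentioned above, would simply compose several such transformations (e.g. rotation, scaling, and the homography) before adding noise.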
