Abstract

The SIFT (scale-invariant feature transform) has demonstrated superior performance in identifying transformed images compared with many other approaches. However, both its detection and matching stages are expensive, because a large number of keypoints are detected in the scale-space and each keypoint is described by a 128-dimensional vector. We present two possible solutions for feature-point reduction. The first is to down-scale the image before SIFT keypoint detection; the second is to use corners (instead of SIFT keypoints), which are visually significant, more robust, and far fewer in number than SIFT keypoints. Corners can be represented either by a curvature descriptor or by the highly distinctive SIFT descriptor computed at the corner location. We then describe a new feature-point matching technique that can be used to match both the down-scaled SIFT keypoints and the corners. Experimental results show that the two feature-point reduction solutions, combined with the SIFT descriptors and the proposed matching technique, not only improve computational efficiency and reduce storage requirements but also improve the accuracy (robustness) of transformed-image identification.
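As a rough illustration of the two reduction strategies described above (not the paper's actual implementation), the sketch below down-scales an image before SIFT detection and, separately, computes SIFT descriptors at corner locations. It uses OpenCV's SIFT and Shi-Tomasi corner detector as stand-ins; the image path, scaling factor, and corner-detector parameters are assumptions chosen only for the example.

```python
import cv2

# Load a grayscale test image (path is an assumed placeholder).
img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

# Solution 1: down-scale the image before SIFT keypoint detection,
# which reduces the number of keypoints found in the scale-space.
scale = 0.5  # assumed down-scaling factor
small = cv2.resize(img, None, fx=scale, fy=scale,
                   interpolation=cv2.INTER_AREA)
kp_small, desc_small = sift.detectAndCompute(small, None)

# Solution 2: detect corners (far fewer than SIFT keypoints) and
# describe them with SIFT descriptors at the corner locations.
# Shi-Tomasi corners substitute here for the paper's corner detector.
corners = cv2.goodFeaturesToTrack(img, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
kp_corners = [cv2.KeyPoint(float(x), float(y), 16)  # 16 = assumed patch size
              for x, y in corners.reshape(-1, 2)]
kp_corners, desc_corners = sift.compute(img, kp_corners)

print("down-scaled SIFT keypoints:", len(kp_small))
print("corner keypoints:", len(kp_corners))
```

Either reduced keypoint set can then be fed to a descriptor matcher; the point of the sketch is simply that both routes yield far fewer feature points than full-resolution SIFT detection.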
