Abstract

Features are distinctive landmarks of an image, and there are various feature detection and description algorithms. Many computer vision algorithms require matching of features from two images. A large number of correct matches, distributed homogeneously across the images, is needed for robust image matching. The matches are generally obtained using a feature distance threshold, and ambiguous matches are rejected using a ratio test. This paper proposes a method that can be added to an image matching pipeline to improve the homogeneity of the match distribution and increase the number of matched feature points. After successfully matching an image pair, spatially close feature points undergo an elimination process that aims to decrease ambiguity in the second matching step. Then, a coarse geometric transformation between the two images is calculated, through which the detected feature points in one image (i.e. the moving image) are projected to the other image (i.e. the fixed image). Next, feature points from the moving image are matched to neighboring feature points of the fixed image within a pre-determined spatial distance. This narrows down the candidate set, so fewer correct matches are rejected by the ratio test. The effectiveness and feasibility of our method are demonstrated with experiments on images acquired from a drone camera during flight.
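The following is a minimal sketch (not the authors' implementation) of the spatially constrained second matching pass described above, using OpenCV's ORB detector and brute-force descriptor matching. The detector choice, the 30-pixel search radius, and the 0.8 ratio threshold are illustrative assumptions, not values taken from the paper, and the elimination of spatially close feature points before the second pass is omitted for brevity.

```python
import cv2
import numpy as np


def match_with_spatial_constraint(img_moving, img_fixed, radius=30.0, ratio=0.8):
    # Detect and describe features in both images (ORB chosen for illustration).
    orb = cv2.ORB_create(2000)
    kp_m, des_m = orb.detectAndCompute(img_moving, None)
    kp_f, des_f = orb.detectAndCompute(img_fixed, None)

    # First pass: standard ratio-test matching to obtain an initial match set.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = bf.knnMatch(des_m, des_f, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    # Coarse geometric transformation (here a RANSAC homography) from the initial matches.
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project every moving-image keypoint into the fixed image.
    pts_m = np.float32([kp.pt for kp in kp_m]).reshape(-1, 1, 2)
    proj = cv2.perspectiveTransform(pts_m, H).reshape(-1, 2)
    pts_f = np.float32([kp.pt for kp in kp_f])

    # Second pass: apply the ratio test only among fixed-image keypoints that
    # lie within the spatial radius of the projected point, which reduces the
    # number of ambiguous candidates.
    matches = []
    for i, p in enumerate(proj):
        near = np.where(np.linalg.norm(pts_f - p, axis=1) < radius)[0]
        if len(near) == 0:
            continue
        dists = [cv2.norm(des_m[i], des_f[j], cv2.NORM_HAMMING) for j in near]
        order = np.argsort(dists)
        best = int(near[order[0]])
        if len(order) == 1 or dists[order[0]] < ratio * dists[order[1]]:
            matches.append(cv2.DMatch(i, best, float(dists[order[0]])))
    return kp_m, kp_f, matches
```

Because the ratio test in the second pass compares only spatially plausible candidates, a correct match is less likely to be discarded due to a visually similar but geometrically distant feature elsewhere in the fixed image.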
