Abstract

We present an algorithm for finding robust matches between images by considering the spatial constraints between pairs of interest points. These constraints capture the layout and structure of features during matching, yielding more robust matches than the common approach of relying on local feature appearance alone. We calculate the similarity between interest point pairs based on a set of spatial constraints, and matches are then found by searching for pairs that satisfy these constraints in a similarity space. Our results show that the algorithm produces more robust matches than baseline SIFT matching and spectral graph matching, with correspondence ratios up to 33% and 28% higher (respectively) across various viewpoints of the test objects, while increasing the computational load by only about 25% over baseline SIFT. The algorithm may also be used with feature descriptors other than SIFT.
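The core idea of checking pairwise spatial constraints can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' exact formulation: it treats two candidate matches as mutually consistent when the distance between their keypoints is roughly preserved across the two images, and keeps only matches supported by enough of the others. The function names, the tolerance, and the support threshold are all illustrative assumptions.

```python
import math

# Hypothetical sketch of pairwise spatial-constraint matching (not the
# paper's exact method). Each candidate match is a pair of keypoint
# positions: ((xa, ya), (xb, yb)) in image A and image B respectively.

def pairwise_consistency(m1, m2, tol=0.2):
    """True if the two matches roughly preserve inter-point distance."""
    (a1, b1), (a2, b2) = m1, m2
    da = math.dist(a1, a2)  # distance between the two keypoints in image A
    db = math.dist(b1, b2)  # distance between their matches in image B
    if max(da, db) == 0:
        return True  # coincident points: trivially consistent
    return abs(da - db) / max(da, db) < tol  # relative distance change

def filter_matches(matches, min_support=0.5):
    """Keep matches consistent with at least min_support of the others."""
    kept = []
    for i, m in enumerate(matches):
        others = [n for j, n in enumerate(matches) if j != i]
        support = sum(pairwise_consistency(m, n) for n in others)
        if others and support / len(others) >= min_support:
            kept.append(m)
    return kept

# Example: image B is image A translated by (10, 0); one match is bogus.
good = [((0, 0), (10, 0)), ((5, 0), (15, 0)), ((0, 5), (10, 5))]
matches = good + [((2, 2), (50, 50))]  # outlier breaks the layout
print(filter_matches(matches))  # outlier is rejected, good matches survive
```

A full implementation would combine several constraints (e.g. relative orientation and scale, not just distance) into the similarity space the abstract describes; this sketch uses a single distance-preservation constraint for clarity.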
