Abstract

To address the scarcity of invariant features that can be extracted and matched between two images under large viewpoint changes, an efficient invariant image matching approach is presented. The proposed approach consists of two main steps. In the first step, a multi-resolution strategy is used to detect maximally stable extremal regions (MSERs) and to obtain the geometric transformation between each pair of corresponding regions in the two images. In the second step, these transformations are used to warp each elliptical region in the second image to a pose in which it resembles the corresponding elliptical region in the reference image; the scale-invariant difference-of-Gaussians (DoG) detector, the scale-invariant feature transform (SIFT) descriptor, and the nearest/next distance ratio metric are then applied to obtain initial matches. Finally, the random sample consensus (RANSAC) algorithm with an epipolar constraint is used to eliminate false matches. Experimental results illustrate the performance of the proposed method.
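As a concrete illustration of the second step, the nearest/next distance ratio test used to filter candidate descriptor matches can be sketched in plain Python. This is a minimal sketch, not the paper's implementation: the descriptors here are toy 2-D vectors, and the function name `ratio_match` and the 0.8 threshold are illustrative assumptions (the paper does not specify its threshold).

```python
from math import dist  # Euclidean distance (Python 3.8+)

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B, keeping a match only
    when the nearest neighbour in B is clearly closer than the second
    nearest (the nearest/next distance ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distances from descriptor i in A to every descriptor in B,
        # sorted so dists[0] is the nearest and dists[1] the next nearest.
        dists = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))  # (index in A, index in B)
    return matches

# Toy 2-D "descriptors": A[0] has one clearly closest neighbour in B and
# is accepted; A[1] has two neighbours at nearly equal distance, so the
# ratio test rejects it as ambiguous.
A = [(0.0, 0.0), (5.0, 5.0)]
B = [(0.1, 0.0), (9.0, 9.0), (5.0, 5.1), (5.1, 5.0)]
print(ratio_match(A, B))  # → [(0, 0)]
```

Rejecting ambiguous nearest neighbours in this way is what makes the subsequent RANSAC step tractable, since the initial match set contains fewer outliers.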
