Abstract

A recent study (Int. J. Comput. Vis. 73(3), 263–284, 2007) has shown that no detector/descriptor combination performs well when the camera viewpoint changes by more than 25–30°. In this paper we introduce an efficient two-step method that significantly increases the number of correct matches between widely separated views of a given 3D scene. First, a few kernel correspondences are identified in the images; then, based on their neighborhood information, the geometric distortion relating the regions surrounding these seed keypoints is estimated iteratively. Next, using these estimated parameters combined with a rough segmentation that reduces the search space of the keypoint descriptors, the neighboring region around every keypoint is warped accordingly. In our experiments the method was tested extensively, yielding promising results over a wide range of viewpoints of images of known 3D models.
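The core of the estimation step described above can be illustrated with a minimal NumPy sketch: given a few seed correspondences, fit the local geometric distortion as an affine map by least squares and use it to warp surrounding keypoint coordinates. This is an assumption for illustration only (the paper's actual distortion model and iterative scheme are not specified in the abstract); the function names and the sample correspondences are hypothetical.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of a 2x3 affine map sending src -> dst.

    src, dst: (N, 2) arrays of seed keypoint correspondences, N >= 3.
    Stand-in for the (unspecified) distortion model in the paper.
    """
    n = src.shape[0]
    # Homogeneous design matrix: each row is [x, y, 1].
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M.T ~= dst in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # shape (2, 3)

def warp_points(M, pts):
    """Apply the estimated distortion to neighboring keypoint coordinates."""
    return pts @ M[:, :2].T + M[:, 2]

# Hypothetical seed correspondences generated from a known distortion.
true_M = np.array([[0.9, -0.2,  5.0],
                   [0.1,  0.8, -3.0]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 4.0]])
dst = warp_points(true_M, src)

M = estimate_affine(src, dst)
```

With the distortion recovered, descriptors can be computed on the warped neighborhoods, which is what lets matching survive large viewpoint changes.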
