Abstract

In this paper we present a novel approach for generating viewpoint-invariant features from single images and demonstrate its application to robust matching across widely separated views in urban environments. Our approach exploits the fact that many man-made environments contain a large number of parallel linear features along several principal directions. We identify the projections of these parallel lines to recover a number of dominant scene planes and then compute viewpoint-invariant features within the rectified views of these planes. We present a comprehensive set of experiments to evaluate the performance of the proposed features. The experiments demonstrate that: (1) the resulting feature descriptors become more distinctive and more robust to camera viewpoint changes after 3D viewpoint normalization; and (2) the features provide robust local information, including patch scale and dominant orientation, that can be used effectively to impose geometric constraints between views. Targeting applications in urban environments, where repetitive structures abound, we further propose an effective framework that uses this novel feature for challenging wide-baseline matching tasks.
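The pipeline summarized above (detect projections of parallel scene lines, recover a dominant plane, rectify it, and describe features in the rectified view) can be sketched as follows. This is a minimal illustration using OpenCV, not the authors' implementation: the homography H mapping the detected plane to a fronto-parallel view is assumed to have been estimated already (e.g. from two vanishing points of the plane's parallel line families), and the line-detection thresholds are illustrative.

```python
import cv2
import numpy as np

def detect_line_segments(gray):
    """Stand-in for the parallel-line detection step: probabilistic
    Hough transform on a Canny edge map. Thresholds are illustrative,
    not taken from the paper."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    return np.empty((0, 4)) if lines is None else lines.reshape(-1, 4)

def rectified_features(image, H, out_size=(640, 480)):
    """Warp one dominant scene plane to a fronto-parallel view
    (3D viewpoint normalization) and compute SIFT keypoints and
    descriptors in the rectified image."""
    rectified = cv2.warpPerspective(image, H, out_size)
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return rectified, keypoints, descriptors
```

Keypoints found in the rectified view can be mapped back into the original image with the inverse homography (cv2.perspectiveTransform with np.linalg.inv(H)), which is what allows their patch scale and dominant orientation to serve as geometric constraints between views.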
