Abstract

This paper addresses the problem of absolute visual ego-localization of an autonomous vehicle equipped with a monocular camera navigating an urban environment. The proposed method combines: 1) a Hidden Markov Model (HMM) exploiting the spatio-temporal coherence of the acquired images, and 2) learnt metrics dedicated to robust visual localization in complex scenes such as streets. The HMM merges odometric measurements with visual similarities computed from specific (local) metrics learnt for each image of the database. To learn these metrics, we define constraints such that the distance between a database image and a query image representing the same scene is smaller than the distance between that query image and neighboring database images. Experiments conducted on a freely available geo-referenced image database show that the proposed method significantly improves localization: the mean localization error is reduced from 12.9 m to 3.9 m over an 11 km path.
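The fusion step described above can be sketched as a standard HMM forward filter: database images are the hidden states, the transition model is centered on the odometry-predicted displacement, and the emission likelihood comes from the learnt visual similarities. The following is a minimal illustrative sketch, not the authors' implementation; the 1-D curvilinear positions, the Gaussian transition noise, and all function and parameter names are assumptions.

```python
import numpy as np

def hmm_localize(visual_sim, odometry_steps, positions, sigma_trans=2.0):
    """Forward-filter over database images (hypothetical sketch).

    visual_sim:     (T, N) array, similarity of each query t to each database image
                    (e.g. derived from the learnt local metrics).
    odometry_steps: (T,) array, distance traveled between consecutive queries (m).
    positions:      (N,) array, curvilinear abscissa of database images along the
                    route (an assumed 1-D simplification of the real geometry).
    Returns the maximum-a-posteriori database index for each query.
    """
    T, N = visual_sim.shape
    # Emission model: normalize similarities into per-query likelihoods.
    emis = visual_sim / visual_sim.sum(axis=1, keepdims=True)
    belief = emis[0].copy()
    estimates = [int(np.argmax(belief))]
    for t in range(1, T):
        # Transition model: Gaussian centered on the odometry-predicted
        # displacement between database positions i and j.
        d = positions[None, :] - positions[:, None]
        trans = np.exp(-0.5 * ((d - odometry_steps[t]) / sigma_trans) ** 2)
        trans /= trans.sum(axis=1, keepdims=True)
        # Predict with odometry, then correct with the visual similarity.
        belief = (belief @ trans) * emis[t]
        belief /= belief.sum()
        estimates.append(int(np.argmax(belief)))
    return estimates
```

The filter keeps localization temporally coherent: a spurious visual match far from the odometry-predicted position receives a near-zero transition weight and cannot hijack the belief.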
