Abstract

Two main challenges, drift and scale ambiguity, prevent monocular visual odometry from being widely applied to real autonomous navigation. In this paper, an iterative localization framework is presented that globally localizes a mobile vehicle equipped with a single camera and a freely available digital map. Inspired by the cloud model concept, a new Gaussian–Gaussian Cloud model is proposed to give a unified representation of the measurement randomness and scale ambiguity in monocular visual odometry. In this model, a collection of cloud drops is generated, and both drift and scale ambiguity are considered and represented simultaneously in each drop. To reduce the measurement uncertainty of any drop in the Gaussian–Gaussian Cloud, road constraints from the open-source map OpenStreetMap are exploited: the map is first converted to a template edge map, and a shape-matching step then assigns each cloud drop a probability indicating how well that drop accords with the road constraints. A parameter estimation scheme narrows the scale ambiguity of monocular visual odometry while resampling the cloud drops. Evaluations on the KITTI benchmark data set and our self-collected data set demonstrate the stability and accuracy of the proposed approach.
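The pipeline sketched in the abstract (sample cloud drops that jointly carry drift and scale, weight them by a road-constraint match, then resample to narrow the scale) resembles an importance-resampling loop. The following is a minimal illustrative sketch, not the authors' implementation: `road_match_prob` is a hypothetical stand-in for the paper's OpenStreetMap shape-matching step, and all parameters (scale prior, drift noise, the landmark at (10, 0)) are invented for the example.

```python
import math
import random

def road_match_prob(x, y):
    # Hypothetical road constraint standing in for OpenStreetMap shape matching:
    # drops near a known map feature at (10, 0) are more map-consistent.
    return math.exp(-0.5 * ((x - 10.0) ** 2 + y ** 2))

def generate_drops(pose, scale_mean, scale_std, drift_std, n=1000):
    """Each cloud drop jointly samples a scale (Gaussian around the current
    scale estimate) and a drift-perturbed position (Gaussian around the
    scaled monocular-odometry pose)."""
    drops = []
    for _ in range(n):
        s = random.gauss(scale_mean, scale_std)          # sampled scale
        x = pose[0] * s + random.gauss(0.0, drift_std)   # drift on position
        y = pose[1] * s + random.gauss(0.0, drift_std)
        drops.append((x, y, s))
    return drops

def weight_and_resample(drops):
    # Weight each drop by its road-match probability, then resample,
    # so drops consistent with the map survive.
    weights = [road_match_prob(x, y) for x, y, _ in drops]
    return random.choices(drops, weights=weights, k=len(drops))

random.seed(0)
pose = (2.0, 0.0)                  # unscaled odometry position (true scale ~5)
drops = generate_drops(pose, scale_mean=5.0, scale_std=1.0, drift_std=0.5)
resampled = weight_and_resample(drops)

scales = [s for _, _, s in resampled]
scale_est = sum(scales) / len(scales)
scale_var = sum((s - scale_est) ** 2 for s in scales) / len(scales)
```

After resampling, the spread of the surviving scales is smaller than the prior spread, which is the sense in which the resampling step "narrows down" the scale ambiguity.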
