Abstract

In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating an unknown environment for the first time, a natural strategy is to memorize key views along the traveled path and use these references as checkpoints for future navigation missions. The navigation framework for wheeled vehicles presented in this paper is based on this idea. During a human-guided learning step, the vehicle traverses paths that are sampled and stored as sets of ordered key images, as acquired by an embedded camera. The visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, a vehicle navigation mission is defined as a concatenation of visual path subsets called a visual route. During autonomous navigation, the controller guides the vehicle along the reference visual route without explicitly planning any trajectory. The controller consists of a vision-based control law adapted to the nonholonomic constraint. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory. The results validate our approach.
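The abstract describes a topologically organized visual memory of key images, from which a visual route is built by concatenating subsets of learned visual paths. A minimal sketch of that idea, under our own assumptions (key images reduced to string identifiers, route construction done with a breadth-first search over the key-image graph; the names `VisualMemory`, `add_path`, and `visual_route` are hypothetical and not from the paper), could look like this:

```python
from collections import defaultdict, deque


class VisualMemory:
    """Topological visual memory: nodes are key-image ids; a directed
    edge links consecutive key images along a learned visual path."""

    def __init__(self):
        self.adj = defaultdict(list)

    def add_path(self, key_images):
        # Store one learned visual path: an ordered sequence of key images.
        for a, b in zip(key_images, key_images[1:]):
            self.adj[a].append(b)

    def visual_route(self, start, goal):
        # Build a visual route as a concatenation of visual-path subsets,
        # here via breadth-first search over the key-image graph.
        prev = {start: None}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                route = []
                while node is not None:
                    route.append(node)
                    node = prev[node]
                return route[::-1]
            for nxt in self.adj[node]:
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        return None  # goal not reachable in the visual memory


memory = VisualMemory()
memory.add_path(["I0", "I1", "I2"])    # first learned visual path
memory.add_path(["I1", "I3", "I4"])    # second path, branching at I1
print(memory.visual_route("I0", "I4")) # route spanning both paths
```

In the paper itself the route is then tracked by a vision-based control law rather than a planned trajectory; this sketch only covers the route-selection step over the memorized key images.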
