Abstract

In this paper, we present a complete framework for autonomous indoor robot navigation. We show that autonomous navigation is possible indoors using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a natural behavior consists in memorizing key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as sets of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot's navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is guided along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law adapted to the nonholonomic constraint. The proposed framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with a Pioneer 3-AT robot navigating in an indoor environment have been carried out with a fisheye camera. The results validate our approach.
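The visual-memory and visual-route construction described above can be sketched as a graph search over key images. The sketch below is a minimal, hypothetical illustration (the identifiers, data layout, and helper names are assumptions, not the paper's implementation): each learned visual path is an ordered list of key-image identifiers, shared key images link paths topologically, and a visual route to a target image is found by breadth-first search over the resulting graph.

```python
from collections import deque

def build_visual_memory(paths):
    """Link consecutive key images of each learned visual path into a
    directed adjacency map; key images shared between paths connect the
    paths topologically. (Hypothetical representation for illustration.)"""
    adj = {}
    for path in paths:
        for a, b in zip(path, path[1:]):
            adj.setdefault(a, set()).add(b)
    return adj

def visual_route(adj, start, target):
    """Concatenate visual-path subsets into a visual route from the
    current key image to the target key image, using BFS."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        route = queue.popleft()
        if route[-1] == target:
            return route
        for nxt in adj.get(route[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(route + [nxt])
    return None  # target not reachable in the visual memory

# Two learned paths sharing key image "I1"; the route to "I4"
# switches from the first path to the second at that junction.
memory = build_visual_memory([["I0", "I1", "I2"], ["I1", "I3", "I4"]])
print(visual_route(memory, "I0", "I4"))
```

During autonomous navigation, each consecutive pair of key images along such a route would serve as the reference for the vision-based control law, so no metric trajectory needs to be planned.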

