Abstract
This paper deals with the problem of Simultaneous Localization and Mapping (SLAM). The solution presented is based on the use of a set of images to represent the environment. In this way, estimating the map amounts to computing the position and orientation of a set of omnidirectional views captured from the environment. The proposed idea departs from the usual representation of a visual map, in which the environment is represented by a set of three-dimensional points in a common reference system, each commonly denoted as a visual landmark. In the case presented here, the robot is equipped with a single omnidirectional visual sensor that allows it to extract a number of interest points from the images, each described by a visual descriptor. The map-building process can be summarized as follows: as the robot traverses the environment, it captures omnidirectional images and extracts a set of interest points from each one. Next, a set of correspondences is found between the current image and the rest of the omnidirectional images in the map. When the number of correspondences found is sufficient, a transformation is computed, consisting of a rotation and a translation (up to an unknown scale factor). In the paper we present a method that builds a map while localizing the robot using this kind of observation. We present results obtained in a simulated environment that validate the proposed idea. In addition, we present experimental results using real data that demonstrate the suitability of the solution.
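The process summarized above hinges on recovering a rotation and a translation (up to an unknown scale) from point correspondences between two views. As an illustration only, the following is a minimal NumPy sketch of the classical eight-point algorithm in normalized image coordinates; the paper works with omnidirectional views and the abstract does not specify the authors' actual estimation method, feature detector, or descriptor, so the function names and the synthetic setup here are hypothetical.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Eight-point estimate of the essential matrix from N >= 8
    correspondences given as Nx2 normalized image coordinates."""
    A = np.array([[u2 * u1, u2 * v1, u2,
                   v2 * u1, v2 * v1, v2,
                   u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    # E (flattened row-major) is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def decompose_essential(E):
    """Return the four (R, t) candidate poses encoded by E; t is a unit
    vector, i.e. the translation is recovered only up to scale."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic check with a known relative pose between two views.
rng = np.random.default_rng(0)
theta = 0.2
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
t_true = np.array([1.0, 0.2, 0.1])
X = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], (20, 3))  # 3-D points
x1 = X[:, :2] / X[:, 2:3]             # projections in view 1
X2 = X @ R_true.T + t_true            # same points in the frame of view 2
x2 = X2[:, :2] / X2[:, 2:3]           # projections in view 2
E = essential_from_correspondences(x1, x2)
candidates = decompose_essential(E)
```

One of the four candidates is the true relative pose; in practice it is selected by the cheirality check (triangulated points must lie in front of both cameras). The unknown scale of the translation is exactly the ambiguity the abstract mentions.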