Abstract

Maps are needed for a wide range of applications. In the context of mobile robotics, the problem of learning a map under uncertainty is often referred to as the simultaneous localization and mapping (SLAM) problem. In this paper, we aim to exploit already available information, such as OpenStreetMap data, within the SLAM problem. We achieve this by relating the information about buildings to the perceptions of the robot and generating constraints for the pose graph-based formulation of the SLAM problem. In addition, we present a way to select target locations for the robot so that, by going there, the robot can expect to reduce its own pose uncertainty. This localizability information is generated directly from OpenStreetMap data and supports active localization. We implemented and evaluated our approach using real-world data recorded in urban environments. Our experiments suggest that we are able to relate the newly built maps, the information from OpenStreetMap, and the laser range finder data from the robot, and in this way improve the map quality. The extension to graph-based SLAM yields better aligned maps and adds only a marginal computational overhead. Furthermore, we illustrate that the localizability information is useful for evaluating how well the robot can localize itself along a given trajectory.
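
To make the pose graph-based formulation concrete, the following is a minimal sketch of the standard graph-based SLAM least-squares objective, extended with an illustrative additional term for the constraints derived from OpenStreetMap buildings. The symbols e^osm_k and Omega^osm_k are assumptions introduced here as placeholders for such building constraints and their information matrices; they are not the paper's exact notation.

\[
\mathbf{x}^{*} = \operatorname*{argmin}_{\mathbf{x}} \;
\sum_{\langle i,j \rangle} \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)^{\top} \boldsymbol{\Omega}_{ij}\, \mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j)
\;+\;
\sum_{k} \mathbf{e}^{\mathrm{osm}}_{k}(\mathbf{x}_k)^{\top} \boldsymbol{\Omega}^{\mathrm{osm}}_{k}\, \mathbf{e}^{\mathrm{osm}}_{k}(\mathbf{x}_k)
\]

Here the first sum contains the usual constraints between robot poses (e.g., from odometry and scan matching), while the second sum relates individual poses to the building information from OpenStreetMap; since both enter as additional quadratic error terms, the optimization machinery of graph-based SLAM remains unchanged, which is consistent with the marginal computational overhead reported in the abstract.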
