Abstract

Simultaneous localization and mapping (SLAM) is essential in numerous robotics applications such as autonomous navigation. Traditional SLAM approaches infer the metric state of the robot along with a metric map of the environment. While existing algorithms achieve good results, they remain sensitive to measurement noise, sensor quality, and data association, and are computationally expensive. Alternatively, we note that some navigation and mapping missions can be achieved using only qualitative geometric information, an approach known as qualitative spatial reasoning (QSR). In this work we contribute a novel probabilistic qualitative localization and mapping approach, which extends the state of the art by also inferring the qualitative state of the camera poses (localization) and by incorporating probabilistic connections between views (in time and in space). Our approach is particularly appealing in scenarios with a small number of salient landmarks and sparse landmark tracks. We evaluate our approach in simulation and on a real-world dataset, and show its superior performance and lower complexity compared to the state of the art.
