Abstract

Simultaneous Localization and Mapping (SLAM) has traditionally relied on representing the environment with low-level geometric features such as points, lines, and planes. Recent advances in object recognition, together with demand for environment representations that support higher-level autonomy, have motivated object-based Semantic SLAM. We present a Semantic SLAM algorithm that directly incorporates a sparse representation of objects into a factor-graph SLAM optimization, yielding a system that is efficient, robust to varying object shapes and environments, and easy to integrate into an existing SLAM pipeline. Our keypoint-based object representation is computationally efficient and supports robust detection under varying conditions and intra-class shape variation. We demonstrate the performance of our algorithm in two different SLAM systems and in varying environments.
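To make the factor-graph idea concrete, the following is a minimal sketch of how object observations can enter a SLAM optimization as factors alongside odometry. This is my own 1-D toy illustration, not the paper's system: the state vector, noise values, and "object detection" factors are all assumptions chosen for simplicity, and the linear problem is solved in closed form rather than with an incremental SLAM back end.

```python
import numpy as np

# Toy 1-D illustration (an assumption for exposition, not the paper's setup):
# state s = [x0, x1, l] -- two robot poses and one object-landmark position.
# Factors: a prior on x0, an odometry factor between x0 and x1, and two
# "object detection" factors measuring the offset l - xi, standing in for
# the keypoint observations described in the abstract.
# Each factor contributes one row to a linear least-squares problem A s = b.
A = np.array([
    [ 1.0, 0.0, 0.0],   # prior:              x0      = 0.0
    [-1.0, 1.0, 0.0],   # odometry:           x1 - x0 = 2.0
    [-1.0, 0.0, 1.0],   # detection from x0:  l  - x0 = 5.1
    [ 0.0, -1.0, 1.0],  # detection from x1:  l  - x1 = 2.9
])
b = np.array([0.0, 2.0, 5.1, 2.9])

# Solve the (linear) factor-graph optimization in closed form; a nonlinear
# system would iterate this linearization (Gauss-Newton / Levenberg-Marquardt).
state, *_ = np.linalg.lstsq(A, b, rcond=None)
x0, x1, l = state
print(x0, x1, l)  # object position l is reconciled with both detections
```

The slightly inconsistent detections (5.1 and 2.9 against an odometry of 2.0) are averaged out by the joint optimization, which is the benefit of putting object observations directly into the factor graph rather than post-processing them.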
