Abstract

Visual SLAM in dynamic environments is regarded as a fundamental task for robots. However, existing works achieve good performance only in indoor scenes, owing to the loss of depth information and the complexity of outdoor scenes. In this paper, we present a semantic SLAM framework based on geometric constraints and deep learning models. Specifically, our method is built on top of the ORB-SLAM2 system with stereo observations. First, semantic features and depth information are acquired using separate deep learning models. A multi-view projection is then generated to reduce the impact of moving objects on pose estimation. Under a hierarchical rule, the feature points are further refined for SLAM tracking via local depth contrast. Finally, multiple dense 3D maps are created incrementally for high-level robot navigation. Evaluation on the public KITTI dataset demonstrates that our method improves the metrics on most sequences and achieves state-of-the-art performance.
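To make the feature-refinement step concrete, the following is a minimal sketch of filtering candidate feature points with a semantic mask of movable classes plus a local depth-contrast test, in the spirit of the pipeline described above. All names and parameters (`DYNAMIC_CLASSES`, the window size, the contrast threshold) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed label IDs for movable classes (e.g. person, rider, car) in a
# Cityscapes-style labeling; purely illustrative.
DYNAMIC_CLASSES = {11, 12, 13}

def keep_static_points(points, depth, semantic, window=5, contrast_thresh=0.3):
    """Return the subset of (u, v) feature points judged static.

    points:   (N, 2) integer pixel coordinates (u = column, v = row)
    depth:    (H, W) float depth map from a stereo/depth-estimation model
    semantic: (H, W) integer class-label map from a segmentation model
    """
    h, w = depth.shape
    r = window // 2
    kept = []
    for u, v in points:
        # Drop points that land on a potentially moving object.
        if semantic[v, u] in DYNAMIC_CLASSES:
            continue
        # Local depth contrast: compare the point's depth to the median of
        # its neighborhood; a large relative gap suggests the point sits on
        # an object boundary and is unreliable for tracking.
        patch = depth[max(0, v - r):min(h, v + r + 1),
                      max(0, u - r):min(w, u + r + 1)]
        local_median = np.median(patch)
        if local_median > 0 and abs(depth[v, u] - local_median) / local_median > contrast_thresh:
            continue
        kept.append((u, v))
    return np.array(kept, dtype=int)
```

The surviving points would then be passed to the ORB-SLAM2 tracking thread in place of the raw keypoint set.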
