Abstract
Visual SLAM in dynamic environments is regarded as a fundamental task for robots. Existing works achieve good performance only in indoor scenes, owing to the loss of depth information and scene complexity. In this paper, we present a semantic SLAM framework based on geometric constraints and deep learning models. Specifically, our method is built on top of the ORB-SLAM2 system with stereo observations. First, semantic features and depth information are acquired using separate deep learning models. Multi-view projection is then generated to reduce the impact of moving objects on pose estimation. Under a hierarchical rule, the feature points are further refined for SLAM tracking via depth local contrast. Finally, multiple dense 3D maps are created in an incrementally updating manner for high-level robot navigation. Evaluation on the public KITTI dataset shows that our method improves the metrics on most sequences and achieves state-of-the-art performance.
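The abstract's feature-refinement step (rejecting points on moving objects, then checking depth local contrast) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name, the 3x3 window, and the contrast threshold are all hypothetical choices.

```python
import numpy as np

def filter_dynamic_features(points, depth_map, dynamic_mask, contrast_thresh=0.5):
    """Keep feature points that (a) fall outside semantic masks of movable
    objects and (b) agree with their local depth neighborhood, a stand-in
    for the paper's 'depth local contrast' check. Window size and threshold
    are illustrative assumptions, not values from the paper."""
    kept = []
    h, w = depth_map.shape
    for (u, v) in points:
        if dynamic_mask[v, u]:  # point lies on a potentially moving object
            continue
        # 3x3 depth window around the point, clipped at image borders
        v0, v1 = max(0, v - 1), min(h, v + 2)
        u0, u1 = max(0, u - 1), min(w, u + 2)
        window_median = np.median(depth_map[v0:v1, u0:u1])
        # relative contrast between the point's depth and its neighborhood
        contrast = abs(depth_map[v, u] - window_median) / (window_median + 1e-6)
        if contrast < contrast_thresh:  # depth consistent with surroundings
            kept.append((u, v))
    return kept
```

A point is dropped either when the semantic mask marks it as belonging to a movable object, or when its depth deviates sharply from its neighborhood (a likely depth outlier or object boundary); the surviving points would then feed the ORB-SLAM2 tracking stage.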