Abstract

In indoor dynamic scenes, traditional RGB-D visual simultaneous localization and mapping (SLAM) algorithms incorrectly use dynamic features to estimate camera poses and do not fully exploit the geometric information in the scene, which degrades the positioning accuracy and robustness of the SLAM system. To address this problem, this paper proposes an RGB-D SLAM algorithm based on multiple geometric features and semantic segmentation. The core of the system is a robust method for excluding dynamic point and line features, which consists of three steps: (a) identify potential dynamic point features with a motion consistency check; (b) obtain potential motion regions via semantic segmentation and combine them with the dynamic point features to determine dynamic regions; (c) remove the point and line features that fall within dynamic regions. This exclusion method can be easily integrated into RGB-D SLAM systems to improve their accuracy and robustness in dynamic scenes. Experimental results on the Technische Universität München (TUM) dataset demonstrate that the proposed algorithm achieves better positioning accuracy and stability than the original dynamic semantic SLAM (DS-SLAM) algorithm in dynamic environments, and comparisons with other classical visual SLAM algorithms further verify its effectiveness. The proposed algorithm also achieves better mapping performance in real indoor scenes.

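For concreteness, the three steps can be sketched roughly as follows. This is a minimal illustration in Python with OpenCV, assuming point features given as pixel coordinates, line segments given by their endpoints, and a binary mask of potentially dynamic classes (e.g. people) produced by an off-the-shelf semantic segmenter; all function names, parameters, and thresholds below are illustrative assumptions, not taken from the authors' implementation.

```python
import cv2
import numpy as np


def potential_dynamic_points(prev_gray, curr_gray, prev_pts, epi_thresh=1.0):
    """Step (a): motion consistency check.

    Track keypoints with LK optical flow, fit a fundamental matrix with
    RANSAC, and flag tracked points whose epipolar error exceeds epi_thresh.
    """
    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    p1_all, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0.reshape(-1, 2)[ok], p1_all.reshape(-1, 2)[ok]

    if len(p0) < 8:
        return p1, np.zeros(len(p1), dtype=bool)
    F, _ = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return p1, np.zeros(len(p1), dtype=bool)

    ones = np.ones((len(p0), 1))
    lines = (F @ np.hstack([p0, ones]).T).T        # epipolar lines in the current frame
    dist = np.abs(np.sum(lines * np.hstack([p1, ones]), axis=1))
    dist /= np.linalg.norm(lines[:, :2], axis=1)
    return p1, dist > epi_thresh                   # large epipolar error -> potentially dynamic


def dynamic_regions(seg_mask, pts, is_dyn, min_dyn_pts=3):
    """Step (b): keep only segmented regions that contain enough dynamic points."""
    h, w = seg_mask.shape
    n_labels, labels = cv2.connectedComponents(seg_mask.astype(np.uint8))
    region_mask = np.zeros_like(seg_mask, dtype=bool)
    for label in range(1, n_labels):               # label 0 is the background
        region = labels == label
        hits = sum(1 for (x, y), d in zip(pts, is_dyn)
                   if d and 0 <= int(y) < h and 0 <= int(x) < w
                   and region[int(y), int(x)])
        if hits >= min_dyn_pts:
            region_mask |= region
    return region_mask


def filter_features(points, lines, region_mask):
    """Step (c): drop point features inside dynamic regions and line features
    whose midpoint lies inside them (a simplification of the paper's rule)."""
    keep_pts = [(x, y) for (x, y) in points if not region_mask[int(y), int(x)]]
    keep_lines = [(x1, y1, x2, y2) for (x1, y1, x2, y2) in lines
                  if not region_mask[int((y1 + y2) / 2), int((x1 + x2) / 2)]]
    return keep_pts, keep_lines
```

In a complete system (for example DS-SLAM, which runs SegNet in a parallel thread), these steps would execute per frame inside the tracking front end, and only the surviving point and line features would be used for pose estimation and mapping.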