Abstract

Visual Simultaneous Localization and Mapping (SLAM) is a key technology for intelligent mobile robots, yet most existing SLAM algorithms suffer from low localization accuracy in dynamic scenes. To address this, a visual SLAM algorithm that combines semantic segmentation with motion consistency detection is proposed. First, the RGB images are segmented by the SegNet network to establish prior semantic information, and the feature points on high-dynamic objects are removed. Second, motion consistency detection is performed: the fundamental matrix is estimated with an improved Random Sample Consensus (RANSAC) algorithm, outlier feature points are identified through the epipolar geometry constraint, and the feature points on low-dynamic objects are eliminated by combining this result with the prior semantic information. Third, the remaining static feature points are used for pose estimation. Finally, the proposed algorithm is evaluated on the TUM dataset, where it reduces the average RMSE of ORB-SLAM2 by 93.99% in highly dynamic scenes, showing that it can effectively improve the localization accuracy of visual SLAM systems in dynamic scenes.
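
The epipolar-geometry check mentioned above can be illustrated with a minimal sketch (not the authors' implementation): assuming matched keypoints between two frames and the standard OpenCV RANSAC estimator as a stand-in for the paper's improved RANSAC, the fundamental matrix is computed and points whose distance to their epipolar line exceeds a threshold are flagged as motion-inconsistent. The function name and the threshold value are illustrative assumptions.

import cv2
import numpy as np

def detect_dynamic_points(pts_prev, pts_curr, dist_thresh=1.0):
    """Flag feature points that violate the epipolar constraint.

    pts_prev, pts_curr: (N, 2) arrays of matched keypoint coordinates in the
    previous and current frames. Returns a boolean mask that is True for points
    considered dynamic (inconsistent with the dominant camera motion). The
    threshold is an illustrative choice, not a value from the paper.
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)

    # Estimate the fundamental matrix with RANSAC (the paper uses an improved
    # RANSAC variant; the standard OpenCV call stands in here).
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        return np.zeros(len(pts_curr), dtype=bool)

    # Homogeneous coordinates of the matched points.
    ones = np.ones((pts_prev.shape[0], 1))
    p1 = np.hstack([pts_prev, ones])   # (N, 3), previous frame
    p2 = np.hstack([pts_curr, ones])   # (N, 3), current frame

    # Epipolar lines in the current frame: l = F * p1, each row (a, b, c).
    lines = (F @ p1.T).T

    # Point-to-line distance |a*x + b*y + c| / sqrt(a^2 + b^2).
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12
    dist = num / den

    # Points far from their epipolar line are treated as dynamic.
    return dist > dist_thresh

In the pipeline described in the abstract, a mask of this kind would be combined with the SegNet semantic priors so that points on low-dynamic objects are also discarded before pose estimation.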
