Abstract

Visual Simultaneous Localization and Mapping (vSLAM) is one of the key technologies in robotics. Most SLAM algorithms assume a static scene, and this strong assumption limits the use of most vSLAM systems in populated real-world environments. Recently, semantic vSLAM systems targeting dynamic scenes have attracted increasing attention. Existing semantic vSLAM systems usually just combine semantic information with a motion check to obtain dynamic-target contours and then delete all feature points inside those contours. This article proposes a new framework that excludes feature points using a mask produced by a probabilistic mesh: superpixel segmentation divides the image into probabilistic mesh cells, the feature-point matching relationships of historical frames are used to propagate the dynamic probability, and only the feature points in low-probability cells are used to estimate camera motion stably. Experiments on the TUM RGB-D dataset [15] show that the average accuracy of the camera trajectory estimated by this method is 90% higher than that of the original ORB-SLAM2 [1]; the method is also compared with other SLAM systems that can cope with dynamic environments.
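The filtering idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the cell size, the blending weight between propagated history and current semantic/motion evidence, and the dynamic-probability threshold are all assumptions introduced here for clarity.

```python
import numpy as np

class ProbabilityMesh:
    """Hypothetical sketch: a grid of per-cell dynamic probabilities.

    Probabilities are carried from the previous frame along feature
    matches, blended with the current frame's semantic/motion evidence,
    and then used to keep only feature points in low-probability cells.
    """

    def __init__(self, image_shape, cell=32, alpha=0.7, thresh=0.5):
        h, w = image_shape
        self.cell = cell          # assumed mesh cell size in pixels
        self.alpha = alpha        # assumed weight of the propagated prior
        self.thresh = thresh      # assumed dynamic-probability threshold
        self.prob = np.zeros((h // cell + 1, w // cell + 1))

    def _cell_of(self, pt):
        x, y = pt
        return int(y) // self.cell, int(x) // self.cell

    def propagate(self, matches, semantic_prob):
        """Propagate probabilities along (prev_pt, cur_pt) matches,
        then blend in semantic_prob: {cell -> observed probability}."""
        carried = np.zeros_like(self.prob)
        count = np.zeros_like(self.prob)
        for prev_pt, cur_pt in matches:
            carried[self._cell_of(cur_pt)] += self.prob[self._cell_of(prev_pt)]
            count[self._cell_of(cur_pt)] += 1
        prior = np.divide(carried, count,
                          out=np.zeros_like(carried), where=count > 0)
        for cell, p in semantic_prob.items():
            prior[cell] = self.alpha * prior[cell] + (1 - self.alpha) * p
        self.prob = prior

    def static_points(self, points):
        """Keep only feature points whose cell is likely static."""
        return [p for p in points
                if self.prob[self._cell_of(p)] <= self.thresh]
```

Used frame to frame, a cell flagged by semantic or motion evidence accumulates probability across matched frames, so its feature points are eventually excluded from the pose estimate while points in stable cells survive.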

