Abstract

Numerous advanced simultaneous localization and mapping (SLAM) algorithms have been developed thanks to scientific and technological advances. However, their practical applicability in complex real-world scenarios is severely limited by the assumption that the environment is static. Improving the accuracy and robustness of SLAM algorithms in dynamic environments is therefore of paramount importance. A significant amount of research has addressed SLAM in dynamic environments using semantic segmentation or object detection, but a major drawback of these approaches is that they may discard static feature points when movable objects happen to be stationary, or retain dynamic feature points when nominally static objects are moved. This paper proposes DynaTM-SLAM, a robust semantic visual SLAM algorithm designed for dynamic environments. DynaTM-SLAM combines object detection and template matching with a sliding window to quickly and efficiently filter out truly dynamic feature points, drastically reducing the impact of dynamic objects. Our approach uses object detection instead of time-consuming semantic segmentation to detect dynamic objects. In addition, an object database is built online, and the camera poses, map points, and objects are jointly optimized by imposing semantic constraints on the static objects. This fully exploits the positive effect of the semantic information of static objects and refines the accuracy of ego-motion estimation in dynamic environments. Experiments were carried out on the TUM RGB-D dataset, and the results demonstrate a significant performance improvement in dynamic scenes.
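To make the core idea concrete, the sketch below illustrates one plausible way to combine object detection with template matching over a sliding window of recent frames, as the abstract describes. This is a minimal illustration, not the paper's actual implementation: all names (`filter_dynamic_points`, `is_static`, the window size, the matching threshold, and the patch size) are assumptions. Feature points outside any movable-object bounding box are kept; points inside a box are kept only if their local patch is reliably re-found in past frames, indicating the movable object is currently stationary.

```python
# Hypothetical sketch of dynamic-point filtering via object detection plus
# template matching over a sliding window. All names and thresholds are
# illustrative assumptions, not DynaTM-SLAM's actual implementation.
import cv2
import numpy as np

MATCH_THRESHOLD = 0.9   # assumed NCC score above which a patch counts as static
PATCH_HALF = 8          # assumed half-size of the template patch around a feature


def in_box(pt, box):
    """Return True if pixel pt = (x, y) lies inside box = (x1, y1, x2, y2)."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2


def is_static(patch, prev_frames):
    """Verify a candidate point by template matching across the window:
    if the local patch is re-found with a high score in recent frames,
    treat the point as static even though it lies on a movable object."""
    scores = []
    for frame in prev_frames:
        res = cv2.matchTemplate(frame, patch, cv2.TM_CCOEFF_NORMED)
        scores.append(res.max())
    return np.mean(scores) > MATCH_THRESHOLD


def filter_dynamic_points(gray, keypoints, movable_boxes, prev_frames):
    """Keep points outside movable-object boxes; for points inside a box,
    keep them only if template matching confirms the object is stationary.

    gray          -- current grayscale frame (np.ndarray)
    keypoints     -- list of cv2.KeyPoint from the feature extractor
    movable_boxes -- detector boxes (x1, y1, x2, y2) for movable classes
    prev_frames   -- sliding window of recent grayscale frames
    """
    kept = []
    h, w = gray.shape
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if not any(in_box((x, y), b) for b in movable_boxes):
            kept.append(kp)  # clearly static background point
            continue
        # Extract a small patch around the feature for verification.
        if PATCH_HALF <= x < w - PATCH_HALF and PATCH_HALF <= y < h - PATCH_HALF:
            patch = gray[y - PATCH_HALF:y + PATCH_HALF,
                         x - PATCH_HALF:x + PATCH_HALF]
            if is_static(patch, prev_frames):
                kept.append(kp)  # on a movable object, but currently parked
    return kept
```

Under these assumptions, only points that fail verification are discarded, so feature points on a parked car or a seated person remain available for pose estimation rather than being removed wholesale, which is the failure mode of segmentation-only approaches noted above.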
