Simultaneous Localization and Mapping (SLAM) is a cornerstone capability for intelligent mobile robots, enabling them to accurately estimate their pose in unknown environments. However, most state-of-the-art visual SLAM systems assume static scenes, which significantly reduces their accuracy and robustness in dynamic environments. In this paper, we propose Inpainting SLAM, a novel RGB-D SLAM system built on the ORB-SLAM2 framework. Inpainting SLAM introduces two new modules. The first is a dynamic object detection module that combines segmentation and depth information to segment dynamic objects; it also includes a new method for deciding whether movable objects should be classified as dynamic. The second is an image inpainting module that restores static regions occluded by dynamic objects, using a new rectified approach to determine the inpainting regions and thereby enhance SLAM performance. Together, these two modules improve the accuracy and robustness of the SLAM system in dynamic scenes. Our method is evaluated on the public TUM dataset, demonstrating its effectiveness and reliability: relative to ORB-SLAM2, the improvements in relative translational error (RTE), relative rotational error (RRE), and absolute trajectory error (ATE) are 97.45%, 99.88%, and 97.90%, respectively. Compared with other state-of-the-art dynamic SLAM methods, our approach also performs favorably.
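The two-module pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the depth-residual threshold, and the mean-fill inpainting are assumptions for illustration only; the actual system uses a learned segmentation model and a dedicated inpainting method to recover the occluded static background.

```python
import numpy as np

def detect_dynamic(movable_mask, depth, depth_expected, thresh=0.3):
    """Classify movable-object pixels as dynamic when their observed depth
    deviates from the depth predicted by the static scene (illustrative rule)."""
    residual = np.abs(depth - depth_expected)
    return movable_mask & (residual > thresh)

def inpaint_static(image, dynamic_mask):
    """Placeholder inpainting: fill dynamic pixels with the mean intensity of
    the remaining static pixels (stand-in for the paper's inpainting module)."""
    out = image.astype(float).copy()
    out[dynamic_mask] = out[~dynamic_mask].mean()
    return out

# Toy frame: a movable object occupies the centre; only part of it moves.
img = np.full((6, 6), 100.0)
movable = np.zeros((6, 6), dtype=bool)
movable[2:4, 2:4] = True
depth = np.full((6, 6), 5.0)
depth[2:4, 2:3] = 2.0  # observed depth disagrees with the scene -> dynamic

dyn = detect_dynamic(movable, depth, np.full((6, 6), 5.0))
restored = inpaint_static(img, dyn)
print(dyn.sum(), restored[2, 2])  # 2 dynamic pixels, filled with static mean
```

The key design point mirrored here is that semantic "movable" labels alone are not enough: a parked car should not be treated as dynamic, so the depth residual against the expected static scene decides the final classification before inpainting.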