Abstract

Visual Simultaneous Localization and Mapping (SLAM) based on RGB-D sensing has developed into a fundamental capability for intelligent mobile robots. However, most existing SLAM algorithms assume a static environment and are not suitable for dynamic scenes, because moving objects can interfere with camera pose tracking and cause undesired objects to be integrated into the map. In this paper, we modify an existing RGB-D SLAM framework for dynamic environments so that it reduces the influence of moving objects and reconstructs the background. The method first performs semantic segmentation and moving-point detection, then removes feature points that lie on moving objects. Meanwhile, a clean and accurate semantic map is produced by a semantic information maintenance component. Quantitative experiments on the TUM RGB-D dataset show that both absolute trajectory accuracy and real-time performance in dynamic scenes are improved.
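As an illustrative sketch only (not the authors' implementation), the "remove feature points on moving objects" step can be pictured as masking out keypoints that fall on pixels labeled dynamic by semantic segmentation. The function name, array shapes, and toy data below are assumptions for illustration:

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that land on static pixels.

    keypoints:    (N, 2) array of (row, col) pixel coordinates
    dynamic_mask: (H, W) boolean array, True where semantic
                  segmentation labels the pixel as a moving object
    """
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    keep = ~dynamic_mask[rows, cols]  # discard points on dynamic pixels
    return keypoints[keep]

# Toy example: a 4x4 frame whose right half is covered by a moving object.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
kps = np.array([[0, 0], [1, 3], [2, 1], [3, 2]], dtype=float)
static = filter_static_keypoints(kps, mask)
print(static)  # only (0, 0) and (2, 1) survive
```

Only the surviving static keypoints would then be passed to camera pose tracking, which is what shields the trajectory estimate from moving objects.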
