Abstract

As intelligent systems spread across many fields, research on driverless vehicles and intelligent industrial robots has grown rapidly, and vision-based simultaneous localization and mapping (SLAM) is one of the most widely used techniques in these systems. Most conventional visual SLAM algorithms assume an ideal static environment; however, such environments rarely exist in practice. It is therefore important to develop visual SLAM algorithms that can localize themselves and perceive their surroundings in real dynamic environments. This paper proposes a lightweight robust dynamic SLAM system based on a novel semantic segmentation network (LRD-SLAM). In the proposed system, a fast deep convolutional neural network (FNet) is integrated into ORB-SLAM2 as a semantic segmentation thread. In addition, a multiview geometry method is introduced that detects dynamic points more accurately from differences in parallax angle and depth, and keyframe information is used to inpaint the static background occluded by removed dynamic objects, facilitating the subsequent reconstruction of the point cloud map. Experimental results on the TUM RGB-D dataset demonstrate that the proposed system improves the positioning accuracy and robustness of visual SLAM in indoor dynamic environments with pedestrians.
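The multiview-geometry cue mentioned in the abstract can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical illustration rather than the paper's implementation: the function name, the threshold values, and the input representation are all assumptions. It flags a map point as dynamic when its reprojected depth disagrees with the depth measured in the current RGB-D frame, while skipping points whose parallax angle between the two viewing rays is so large that the disagreement is better explained by occlusion than by motion.

```python
import numpy as np

# Hypothetical thresholds; the paper's actual values are not given in the abstract.
PARALLAX_ANGLE_THRESH = np.deg2rad(30.0)   # viewpoint-change guard (assumed)
DEPTH_DIFF_THRESH = 0.4                    # metres (assumed)

def is_dynamic_point(p_world, cam_center_kf, cam_center_cur,
                     z_projected, z_measured):
    """Flag a map point as dynamic using a parallax-angle and depth-difference test.

    p_world        -- 3D point in world coordinates (seen from a past keyframe)
    cam_center_kf  -- keyframe camera centre in world coordinates
    cam_center_cur -- current-frame camera centre in world coordinates
    z_projected    -- depth of the point reprojected into the current frame
    z_measured     -- depth read from the current RGB-D depth map
    """
    # Parallax angle between the two viewing rays of the point.
    ray_kf = p_world - cam_center_kf
    ray_cur = p_world - cam_center_cur
    cos_angle = np.dot(ray_kf, ray_cur) / (
        np.linalg.norm(ray_kf) * np.linalg.norm(ray_cur))
    parallax = np.arccos(np.clip(cos_angle, -1.0, 1.0))

    # A large parallax angle usually signals occlusion rather than motion,
    # so the depth cue is unreliable and the point is not flagged.
    if parallax > PARALLAX_ANGLE_THRESH:
        return False

    # If the measured depth disagrees with the reprojected depth,
    # the observed surface has moved relative to the static map point.
    return abs(z_projected - z_measured) > DEPTH_DIFF_THRESH
```

In a full pipeline this geometric test would complement the semantic segmentation thread: segmentation removes points on known movable classes such as pedestrians, while the geometric check catches moving objects the network does not label.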
