Abstract
A robot operating in an unknown environment needs the ability to perform Simultaneous Localization and Mapping (SLAM), which supports navigation and tracking. Traditional methods assume that the robot works in a static environment, but in many cases it must operate in highly dynamic environments containing moving objects such as people. This paper presents a novel SLAM system named FD-SLAM, built on ORB-SLAM2 [1], which adds dynamic object detection and the fusion of depth and semantic images to better estimate the positions of people in an image. To improve real-time performance, FD-SLAM runs five threads simultaneously: tracking, semantic segmentation, local mapping, loop closing, and point cloud mapping. FD-SLAM uses a deep neural network to detect dynamic objects and combines each depth image with the corresponding semantic image to obtain a more accurate position of each dynamic object, thereby reducing its impact on camera pose estimation. Finally, the point cloud thread builds a point cloud map from the keyframes. We evaluate the system on a public dataset. The results show that the absolute trajectory error of FD-SLAM is smaller than that of ORB-SLAM and DS-SLAM, and that it achieves better real-time performance than other SLAM systems designed for dynamic environments.
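The core idea of fusing the depth image with the semantic image can be illustrated with a minimal sketch. The function below is an assumption about how such a fusion step might look, not the paper's actual implementation: pixels whose semantic label belongs to a dynamic class (e.g., "person") have their depth invalidated, so feature points on those pixels are excluded from camera pose estimation. The label id `PERSON_LABEL` is hypothetical.

```python
import numpy as np

# Hypothetical class id for "person" in the semantic segmentation output.
PERSON_LABEL = 15

def mask_dynamic_depth(depth, semantic, dynamic_labels=(PERSON_LABEL,)):
    """Invalidate depth pixels that fall on dynamic-object classes.

    depth:    HxW float array from the RGB-D sensor.
    semantic: HxW int array of per-pixel class labels.
    Returns a copy of `depth` where dynamic pixels are set to 0
    (a depth of 0 is conventionally treated as invalid in RGB-D SLAM).
    """
    depth = depth.copy()
    dynamic = np.isin(semantic, dynamic_labels)
    depth[dynamic] = 0.0
    return depth

# Toy 2x2 example: top row is a person, bottom row is static background.
depth = np.array([[1.2, 1.3], [2.0, 2.1]], dtype=np.float32)
semantic = np.array([[15, 15], [0, 0]], dtype=np.int32)
masked = mask_dynamic_depth(depth, semantic)
```

In a full pipeline this step would run in the semantic segmentation thread, and the tracking thread would then fit the camera pose only to features with valid (non-zero) depth.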