Abstract

Simultaneous localization and mapping (SLAM) is one of the core technologies for intelligent mobile robots. However, when robots perform visual SLAM (VSLAM) in dynamic scenes, dynamic objects reduce the accuracy of mapping and localization. Introducing deep-learning-based semantic information into the SLAM system to eliminate the influence of dynamic objects, however, incurs a high computational cost. To address this issue, this paper proposes a method called YF-SLAM, which is based on the lightweight object detection network YOLO-Fastest and tightly coupled with depth geometry to remove dynamic feature points. The method quickly identifies dynamic target regions in a scene and then applies depth-geometry constraints to filter out dynamic feature points, improving VSLAM positioning accuracy while keeping the system real-time and efficient. The proposed method is evaluated on the publicly available TUM dataset and a self-made indoor dataset. Compared with ORB-SLAM2, the root-mean-square error of the Absolute Trajectory Error (ATE) is reduced by up to 98.27%. The system successfully localizes and constructs an accurate environmental map in a real indoor dynamic environment using a mobile robot, and it runs in real time on low-power embedded platforms.
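The core idea of combining detection with depth geometry can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the bounding-box format, the per-box reference depth, and the `depth_tol` threshold are all assumptions. A feature point inside a detected dynamic region is discarded unless its depth is clearly greater than the object's depth, i.e. it belongs to the static background visible around the object.

```python
def filter_dynamic_points(points, depths, dynamic_boxes, depth_tol=0.3):
    """Filter feature points using detected dynamic regions plus a
    depth-consistency check (illustrative sketch, not the paper's code).

    points        -- list of (u, v) pixel coordinates of feature points
    depths        -- per-point depth in meters (e.g. from an RGB-D sensor)
    dynamic_boxes -- list of (x1, y1, x2, y2, box_depth): detector output
                     with an estimated depth of the dynamic object
    depth_tol     -- assumed tolerance separating object from background
    """
    kept = []
    for (u, v), d in zip(points, depths):
        dynamic = False
        for (x1, y1, x2, y2, box_depth) in dynamic_boxes:
            if x1 <= u <= x2 and y1 <= v <= y2:
                # Inside a detected dynamic region: keep the point only
                # if it lies well behind the object (static background).
                if d <= box_depth + depth_tol:
                    dynamic = True
                    break
        if not dynamic:
            kept.append((u, v))
    return kept


# Example: one dynamic box (e.g. a walking person) at roughly 2.0 m depth.
points = [(150, 100), (160, 120), (300, 100)]
depths = [2.0, 4.5, 3.0]
boxes = [(100, 50, 200, 250, 2.0)]
# (150,100) is on the person and is dropped; (160,120) is inside the box
# but far behind it; (300,100) is outside the box entirely.
print(filter_dynamic_points(points, depths, boxes))
```

Only the remaining (presumed static) points would then be passed to the ORB-SLAM2 tracking front end for pose estimation.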
