Abstract

Visual simultaneous localization and mapping (SLAM) has become a key research direction in mobile robotics in recent years. However, the accuracy and stability of traditional visual SLAM are greatly affected by dynamic environments, and the mainstream approach to dynamic feature point rejection, which combines vision with semantic segmentation, is unsuitable for resource-constrained edge devices with strict real-time requirements. To address these problems, this paper proposes a visual SLAM algorithm based on the lightweight YOLOv10n object detection model and the GCNv2 feature point extraction model to achieve real-time rejection of dynamic feature points. To compensate for the detection accuracy and stability limitations of YOLOv10n while retaining its real-time advantage, the algorithm further employs multi-target Kalman filtering with data association via the Hungarian algorithm, history-window smoothing, and recording of potentially dynamic feature points to improve robustness. The algorithm is validated on the TUM RGB-D dataset, and the results show that it effectively rejects dynamic feature points in dynamic environments and significantly improves the accuracy and stability of the visual SLAM system.
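
To make the pipeline concrete, below is a minimal sketch (not the authors' code) of the tracking-and-rejection loop the abstract describes: each detected dynamic object is tracked by a constant-velocity Kalman filter, detections are associated to tracks with the Hungarian algorithm, boxes are smoothed over a short history window, and feature points falling inside smoothed boxes are dropped. All class names, box parameterizations, and noise/gate/window parameters here are illustrative assumptions.

```python
# Sketch of dynamic feature point rejection via tracked detection boxes.
# Assumes detections and boxes are (cx, cy, w, h) in pixels; all constants
# (process/measurement noise, gate, window, margin) are assumed values.
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

class BoxKalman:
    """Constant-velocity Kalman filter over a box state (cx, cy, w, h, vx, vy)."""
    def __init__(self, box):
        self.x = np.array([*box, 0.0, 0.0])
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[0, 4] = self.F[1, 5] = 1.0        # center moves by velocity
        self.H = np.eye(4, 6)                    # we observe the box only
        self.Q = 0.01 * np.eye(6)                # assumed process noise
        self.R = 1.0 * np.eye(4)                 # assumed measurement noise
        self.history = [np.asarray(box, float)]  # window for smoothing

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4].copy()

    def update(self, box, window=5):
        z = np.asarray(box, float)
        y = z - self.H @ self.x                  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S) # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        self.history = (self.history + [self.x[:4].copy()])[-window:]

    def smoothed_box(self):
        return np.mean(self.history, axis=0)     # history-window smoothing

def associate(tracks, detections, gate=80.0):
    """Hungarian assignment on center distance; costs above the gate are rejected."""
    if not tracks or not detections:
        return [], list(range(len(detections)))
    preds = [t.predict()[:2] for t in tracks]    # one predict per track per frame
    cost = np.array([[np.linalg.norm(p - np.asarray(d[:2], float))
                      for d in detections] for p in preds])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]
    matched_dets = {c for _, c in matches}
    unmatched = [c for c in range(len(detections)) if c not in matched_dets]
    return matches, unmatched

def reject_dynamic_points(keypoints, tracks, margin=5.0):
    """Keep only feature points lying outside every smoothed dynamic box."""
    kept = []
    for u, v in keypoints:
        inside = any(
            abs(u - cx) <= w / 2 + margin and abs(v - cy) <= h / 2 + margin
            for cx, cy, w, h in (t.smoothed_box() for t in tracks)
        )
        if not inside:
            kept.append((u, v))
    return kept
```

A per-frame loop would call associate() on the current YOLO detections, update matched tracks with their detections, spawn new BoxKalman tracks for unmatched detections, and filter the frame's GCNv2 keypoints with reject_dynamic_points() before pose estimation; the smoothing window is what compensates for per-frame detector misses the abstract mentions.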
