Abstract

In dynamic scenes, moving objects cause significant error accumulation in a robot's pose estimation and may even lead to tracking loss. To address these problems, this paper proposes a semantic visual simultaneous localization and mapping (SLAM) algorithm based on YOLOv7. First, the lightweight YOLOv7 network is employed to acquire semantic information about the objects in the scene, and flood-filling and edge-enhancement techniques are combined to separate dynamic feature points from the extracted feature point set accurately and quickly. The remaining high-confidence static feature points are then used to estimate the robot's pose accurately. Next, a high-performance keyframe selection strategy is constructed from the semantic information provided by YOLOv7, the motion magnitude of the robot, and the number of dynamic feature points in the camera's field of view. On this basis, a robust loop closure detection method is developed by introducing the semantic information into the bag-of-words model, and global bundle adjustment is performed on all keyframes and map points to obtain a globally consistent pose graph. Finally, YOLOv7 is further used to perform semantic segmentation on the keyframes and remove dynamic objects from their semantic masks, and point cloud preprocessing is combined with an octree representation to build a 3D semantic map for navigation. A series of experiments on the TUM dataset and a case study in a real scene demonstrate the superior performance of the proposed algorithm.
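
To make the dynamic-feature filtering step concrete, the following is a minimal OpenCV sketch of the idea, not the paper's implementation. It assumes YOLOv7 detections arrive as (class_name, (x, y, w, h)) tuples; DYNAMIC_CLASSES, segment_dynamic_region, and split_features are hypothetical names, the thresholds are illustrative, and a Canny edge map stands in for the paper's edge-enhancement step, acting as a barrier for a flood fill seeded at each dynamic box's centre.

```python
import cv2
import numpy as np

# Object classes treated as potentially dynamic (an assumed list; the paper's may differ).
DYNAMIC_CLASSES = {"person", "car", "bicycle", "dog", "cat"}

def segment_dynamic_region(gray, box, edges):
    """Flood-fill outward from the box centre, using the edge map as a barrier,
    and return a binary mask of the filled region clipped to the bounding box."""
    h, w = gray.shape
    x, y, bw, bh = box                      # integer pixel coordinates assumed
    seed = (x + bw // 2, y + bh // 2)
    # floodFill requires a mask 2 px larger than the image; non-zero pixels block the fill.
    fill_mask = np.zeros((h + 2, w + 2), np.uint8)
    fill_mask[1:-1, 1:-1] = (edges > 0).astype(np.uint8)   # edge barrier = 1
    if fill_mask[seed[1] + 1, seed[0] + 1]:
        return np.zeros((h, w), np.uint8)   # seed sits on an edge; skip this box
    # Write 255 into the mask for filled pixels without modifying the image.
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | cv2.FLOODFILL_FIXED_RANGE | (255 << 8)
    cv2.floodFill(gray, fill_mask, seed, 0, loDiff=10, upDiff=10, flags=flags)
    region = (fill_mask[1:-1, 1:-1] == 255).astype(np.uint8)
    clipped = np.zeros_like(region)
    clipped[y:y + bh, x:x + bw] = region[y:y + bh, x:x + bw]
    return clipped

def split_features(gray, keypoints, detections):
    """Separate cv2.KeyPoint lists into static/dynamic sets using the masks."""
    edges = cv2.Canny(gray, 50, 150)
    mask = np.zeros(gray.shape, np.uint8)
    for cls, box in detections:
        if cls in DYNAMIC_CLASSES:
            mask |= segment_dynamic_region(gray, box, edges)
    static, dynamic = [], []
    for kp in keypoints:
        u, v = int(kp.pt[0]), int(kp.pt[1])
        (dynamic if mask[v, u] else static).append(kp)
    return static, dynamic
```

Only the points returned in `static` would then be passed to pose estimation, so that moving objects no longer corrupt the motion estimate.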
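The keyframe selection strategy can likewise be sketched as a simple decision rule. This is a hypothetical illustration of the criteria named in the abstract (motion magnitude and dynamic-point count); the class name, fields, and threshold values are all assumptions, not the paper's parameters.

```python
from dataclasses import dataclass

@dataclass
class KeyframePolicy:
    min_gap: int = 5                 # minimum frames between keyframes
    motion_thresh: float = 0.15     # motion magnitude that warrants a keyframe
    max_dynamic_ratio: float = 0.5  # reject frames dominated by dynamic points

    def is_keyframe(self, frames_since_kf, motion_mag, n_static, n_dynamic):
        total = n_static + n_dynamic
        if total == 0:
            return False
        # Frames dominated by dynamic objects give unreliable geometry.
        if n_dynamic / total > self.max_dynamic_ratio:
            return False
        # Sufficient camera motion after the minimum gap triggers a new keyframe.
        return frames_since_kf >= self.min_gap and motion_mag > self.motion_thresh
```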
