Abstract

Significant progress has been made in visual SLAM (Simultaneous Localization and Mapping). However, localization accuracy can degrade significantly in low-texture environments and under changing illumination. To address these problems, this paper proposes an enhanced visual SLAM algorithm based on the VGG (Visual Geometry Group) network. First, a VGG network for feature point extraction is incorporated into the visual odometry (VO) front end to achieve robust camera pose estimation. Second, an automatic corner annotation method is adopted to build the training database, reducing the workload of data annotation. Third, the bundle adjustment (BA) optimization process is improved to make the back-end optimization better suited to the VGG-based VO. Experimental results show that the proposed method outperforms the mainstream ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM) method in the number of effective feature points, the robustness of feature matching to illumination changes, and the accuracy of robot pose estimation.
