Abstract

Image features detected by a visual SLAM (simultaneous localization and mapping) algorithm carry no scale information; as features lacking depth accumulate, the scale becomes ambiguous, which leads to degraded pose estimates and tracking failure. In this paper, we introduce the lidar point cloud to provide additional depth information for image features when estimating ego-motion, thereby assisting visual SLAM. To enhance the stability of pose estimation, the front-end of nonlinear-optimization-based visual SLAM is improved: an epipolar error term is introduced into the frame-to-frame pose estimation, and residuals are computed according to whether each feature point has depth information. The feature residuals are combined into the objective function, which is solved iteratively for the robot's pose. A keyframe-based method optimizes the pose locally to reduce the complexity of the optimization problem. Experimental results show that the improved algorithm achieves better results on the KITTI dataset and in outdoor scenes; compared with a purely visual SLAM algorithm, the trajectory error of the mobile robot is reduced by 52.7%. The LV-SLAM algorithm proposed in this paper shows good adaptability and robustness in different environments.
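As an illustrative sketch only (the abstract does not give the exact formulation), a mixed-residual objective of the kind described above is typically of the form

\min_{T} \sum_{i \in \mathcal{D}} \rho\left( \lVert \pi(T P_i) - u_i \rVert^2 \right) + \sum_{j \in \mathcal{U}} \rho\left( \left( x_j'^{\top} E\, x_j \right)^2 \right)

where T is the frame-to-frame pose, \mathcal{D} and \mathcal{U} are the sets of features with and without lidar depth, \pi(\cdot) is the camera projection, P_i is the 3D point recovered from the lidar depth, u_i is its observed pixel, x_j and x_j' are matched normalized image coordinates, E is the essential matrix induced by T, and \rho is a robust kernel. The first sum is a standard 3D-2D reprojection residual for depth-augmented features; the second is an epipolar residual for features without depth. The notation here is ours, not the paper's.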
