Abstract

Visual simultaneous localization and mapping (SLAM) is currently a hot topic in the field of unmanned systems. It is popular among researchers because of its accurate localization, low cost, rich information content, and wide range of applications, but it still suffers from weaknesses, including the camera's sensitivity to the number of available feature points and the susceptibility of the inertial measurement unit (IMU) to noise during uniform linear motion. To address these problems, this paper studies a multi-sensor fusion localization algorithm. The main work is as follows. Based on ORB-SLAM3, a visual-inertial-laser SLAM algorithm is designed. The relative motion of the laser localization between image frames is obtained from the data of a 2D LiDAR and a laser height sensor, and the relative motion of the IMU between image frames is obtained from IMU preintegration. Using factor graph optimization, the pose of each image frame is optimized under three kinds of constraints: the reprojection of map points, the relative motion increment of the IMU, and the relative motion increment of the laser localization. On data from a UAV physical platform, the algorithm improves localization accuracy by about 24.4% over the visual mode of ORB-SLAM3 and by about 22.6% over its visual-inertial mode.
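To make the three-constraint factor-graph formulation concrete, the sketch below shows how such an optimization over two image-frame poses and one map point could be set up with the GTSAM library. It is a minimal illustration under stated assumptions, not the paper's implementation: the camera intrinsics, noise sigmas, and measurements are invented values, and the IMU preintegration constraint (which in practice also involves velocity and bias states) is simplified here to a relative-pose BetweenFactor, as is the laser odometry increment.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import L, X  # X(i): frame poses, L(j): map points

# Factor graph holding the three kinds of constraints described above
graph = gtsam.NonlinearFactorGraph()

# Anchor the first image frame at the origin so the problem is well-posed
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# 1) Reprojection of a map point observed in two image frames
#    (intrinsics and pixel measurements are illustrative values)
K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)
pix_noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)  # 1-pixel std-dev
graph.add(gtsam.GenericProjectionFactorCal3_S2(
    gtsam.Point2(330.0, 245.0), pix_noise, X(0), L(0), K))
graph.add(gtsam.GenericProjectionFactorCal3_S2(
    gtsam.Point2(320.0, 245.0), pix_noise, X(1), L(0), K))

# 2) IMU relative-motion increment between the two frames. A real IMU
#    preintegration factor also estimates velocity and biases; a
#    relative-pose factor keeps this sketch small.
imu_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05]))  # rot (rad), trans (m)
delta_imu = gtsam.Pose3(gtsam.Rot3(), np.array([0.102, 0.001, 0.0]))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), delta_imu, imu_noise))

# 3) Laser relative-motion increment from the 2D LiDAR + height sensor
laser_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.03, 0.03, 0.03]))
delta_laser = gtsam.Pose3(gtsam.Rot3(), np.array([0.099, 0.0, 0.001]))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), delta_laser, laser_noise))

# Initial guesses, deliberately perturbed from the true values
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), gtsam.Pose3(gtsam.Rot3(), np.array([0.12, 0.0, 0.0])))
initial.insert(L(0), gtsam.Point3(0.12, 0.04, 4.8))

# Jointly optimize the frame poses and the map point
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))
```

Because each measurement enters as an independent factor, the visual, inertial, and laser constraints can be weighted by their own noise models and jointly refine the same frame poses, which is the structural idea behind the fusion scheme the abstract describes.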
