Abstract

Emerging fields such as Internet of Things applications, driverless cars, and indoor mobile robots have created growing demand for simultaneous localization and mapping (SLAM) technology. In this study, we design a SLAM scheme called BVLI-SLAM based on binocular vision, 2D lidar, and an inertial measurement unit (IMU). The pose estimates provided by vision and the IMU supply better initial values for the 2D lidar mapping algorithm and improve mapping quality. Lidar, in turn, assists vision by providing stronger plane and yaw-angle constraints in weakly textured areas, yielding a higher-precision 6-degree-of-freedom pose. BVLI-SLAM uses graph optimization to fuse the data from the IMU, binocular camera, and laser. IMU pre-integration terms are combined with the visual reprojection error and the laser matching error into a joint error function, which is minimized by sliding-window bundle adjustment to compute the pose in real time. Outdoor experiments on the KITTI datasets and indoor experiments on a trolley-based mobile measurement platform show that BVLI-SLAM improves mapping quality, positioning accuracy, and robustness to varying degrees compared with VINS-Fusion and Cartographer, and can solve the problem of positioning and plane mapping in complex indoor scenes.
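To make the fusion idea concrete, the following is a minimal toy sketch (not the paper's implementation) of a sliding-window least-squares problem that stacks weighted residuals from three sensor modalities, in the spirit of the joint error function described above. The visual reprojection term is reduced to noisy absolute pose observations, and the IMU pre-integration and laser scan-matching terms are reduced to noisy relative-pose measurements; all measurements, weights, and variable names here are illustrative assumptions, and planar (x, y, yaw) poses stand in for the full 6-DoF state.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sliding window of 3 planar keyframe poses, each (x, y, yaw).
# Ground-truth trajectory used only to synthesize measurements.
TRUE = np.array([[0.0, 0.0, 0.00],
                 [1.0, 0.1, 0.05],
                 [2.0, 0.3, 0.10]])

rng = np.random.default_rng(0)
# Hypothetical measurements standing in for the three error terms:
VIS_OBS = TRUE + rng.normal(0, 0.05, TRUE.shape)                 # visual term (absolute)
IMU_OBS = np.diff(TRUE, axis=0) + rng.normal(0, 0.01, (2, 3))    # IMU pre-integration (relative)
LIDAR_OBS = np.diff(TRUE, axis=0) + rng.normal(0, 0.02, (2, 3))  # laser matching (relative)

# Assumed scalar information weights (a real system would use
# the square roots of the measurement information matrices).
W_VIS, W_IMU, W_LIDAR = 1.0, 5.0, 3.0

def residuals(x):
    """Stack the three weighted residual types over the window."""
    poses = x.reshape(-1, 3)
    rel = np.diff(poses, axis=0)
    return np.concatenate([
        W_VIS * (poses - VIS_OBS).ravel(),    # visual reprojection stand-in
        W_IMU * (rel - IMU_OBS).ravel(),      # IMU pre-integration stand-in
        W_LIDAR * (rel - LIDAR_OBS).ravel(),  # laser matching stand-in
    ])

# Bundle-adjustment-style joint minimization over the whole window.
sol = least_squares(residuals, x0=np.zeros(TRUE.size))
print(sol.x.reshape(-1, 3))
```

In a real system such as the one the abstract describes, each residual block would carry its own Jacobian and robust loss, old keyframes would be marginalized as the window slides, and the solver would run incrementally for real-time operation; the toy above only illustrates how heterogeneous residuals enter a single optimization.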
