Abstract

To make up for the shortcomings of individual sensors and provide more reliable state estimation, this paper proposes a multi-sensor fusion framework for simultaneous localization and mapping (SLAM). First, the light detection and ranging (LiDAR) point cloud is screened in the front-end processing to eliminate abnormal points and improve positioning and mapping accuracy. Second, to address false registration when the LiDAR is surrounded by repeated structures, the intensity value of the laser point cloud is used as a screening condition to select robust visual features with high distance confidence, mitigating this degeneracy. Then, the initial factor, registration factor, inertial measurement unit (IMU) factor, and loop factor are inserted into a factor graph, and a factor-graph optimization algorithm based on a Bayes tree performs incremental estimation to realize the data fusion. The algorithm was tested in campus and real road environments. The experimental results show that the proposed algorithm achieves state estimation and map construction with high accuracy and strong robustness.

Highlights

  • At present, the simultaneous localization and mapping (SLAM) algorithm plays an important role in mobile robot navigation

  • Suppose that the starting time of the light detection and ranging (LiDAR)'s i-th scan is $t_i$, the scanned point cloud is marked as $F_i$, and the data collected by the inertial measurement unit (IMU) in $[t_i, t_j]$ is recorded as $I_{(i,j)}$; the motion state of the system can then be expressed as $x_i = [R_i^T, P_i^T, V_i^T, b_i^T]^T$

  • To improve the accuracy of positioning and mapping, the point cloud scanned by the LiDAR is processed, and irrelevant points are eliminated according to a set of screening rules
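The paper's exact screening rules are not reproduced in this excerpt. As a minimal sketch of the idea, the pass below gates points by range and by return intensity (the abstract uses intensity as a screening condition); the threshold values and the `screen_scan` helper are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical screening pass over one LiDAR scan. The range gate and
# minimum-intensity values below are assumptions for illustration only.
MIN_RANGE = 1.0      # drop returns on the robot body (assumed)
MAX_RANGE = 80.0     # drop unreliable far returns (assumed)
MIN_INTENSITY = 5.0  # weak returns have low distance confidence (assumed)

def screen_scan(points):
    """Keep points whose range and intensity pass the gates.

    `points` is a list of (x, y, z, intensity) tuples.
    """
    kept = []
    for x, y, z, inten in points:
        r = math.sqrt(x * x + y * y + z * z)
        if not (MIN_RANGE <= r <= MAX_RANGE):
            continue  # too close or too far: treated as abnormal
        if inten < MIN_INTENSITY:
            continue  # weak return: low distance confidence
        kept.append((x, y, z, inten))
    return kept

scan = [
    (0.3, 0.1, 0.0, 50.0),    # on the robot body -> dropped
    (5.0, 2.0, 0.5, 40.0),    # valid
    (120.0, 0.0, 3.0, 30.0),  # beyond max range -> dropped
    (10.0, -4.0, 1.0, 1.0),   # weak return -> dropped
]
print(len(screen_scan(scan)))  # 1 point survives
```

In a real pipeline this filter would run per scan before feature extraction, so that abnormal points never enter registration.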



Introduction

The simultaneous localization and mapping (SLAM) algorithm plays an important role in mobile robot navigation. The feature extraction method in the LiDAR front-end processing is mainly inspired by lightweight and ground-optimized LiDAR odometry and mapping (LeGO-LOAM), which divides the LiDAR point cloud into ground points and segmented points. Visual-LiDAR odometry and mapping (VLOAM) integrates monocular camera and LiDAR information to build maps, but does not add closed-loop detection. A tightly coupled LiDAR-visual-inertial system based on factor graph optimization is therefore proposed to realize multi-sensor information fusion and construct a three-dimensional point cloud map with high robustness in real time. The system comprises front-end processing of the vision and LiDAR data, IMU pre-integration, odometry fusing vision, IMU, and LiDAR, map construction, closed-loop detection, and factor graph optimization.
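To make the factor-graph fusion step concrete, the sketch below solves a toy 1-D pose graph combining the four factor types named in the abstract: an initial (prior) factor, registration factors between consecutive poses, and a loop factor that corrects accumulated drift. This is a batch linear least-squares illustration, not the paper's Bayes-tree incremental solver; all measurements, weights, and helper names (`fuse`, `solve_linear`) are hypothetical.

```python
# Toy 1-D factor-graph fusion: prior + relative (registration) factors
# + a loop factor, solved via the normal equations H x = g.
# All numbers here are made up for illustration.

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

def fuse(n_poses, factors):
    """Accumulate unary/binary linear factors into H x = g and solve.

    Each factor is (i, j, measurement, weight); j is None for a unary
    prior on x_i, otherwise the factor measures x_j - x_i.
    """
    H = [[0.0] * n_poses for _ in range(n_poses)]
    g = [0.0] * n_poses
    for i, j, z, w in factors:
        if j is None:              # initial/prior factor: x_i ~ z
            H[i][i] += w
            g[i] += w * z
        else:                      # relative factor: x_j - x_i ~ z
            H[i][i] += w; H[j][j] += w
            H[i][j] -= w; H[j][i] -= w
            g[i] -= w * z; g[j] += w * z
    return solve_linear(H, g)

factors = [
    (0, None, 0.0, 100.0),  # initial factor anchoring x0 at 0
    (0, 1, 1.0, 10.0),      # registration factor
    (1, 2, 1.0, 10.0),      # registration factor
    (2, 3, 1.2, 10.0),      # slightly drifted registration
    (0, 3, 3.0, 10.0),      # loop factor: x3 - x0 measured as 3.0
]
est = fuse(4, factors)
# The loop factor pulls the drifted chain (which alone would put
# x3 at 3.2) back toward the loop measurement of 3.0.
print([round(v, 3) for v in est])
```

A Bayes-tree solver such as iSAM2 reaches the same optimum but updates the solution incrementally as new factors arrive, which is what makes real-time fusion of IMU, registration, and loop factors practical.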

Previous Work
State Estimation Based on Factor Graph Optimization
LiDAR Front End
School Environment Test
Real Urban Road Environment Test
Findings
Conclusions
