Abstract

In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and related fields, and extrinsic calibration is a prerequisite for multi-sensor fusion applications. This paper proposes an automatic 3D LIDAR-to-camera calibration framework based on graph optimization. The system automatically identifies the position of the calibration pattern, builds a set of virtual feature point clouds, and can calibrate the LIDAR against multiple cameras simultaneously. To test the framework, a multi-sensor system is assembled on a mobile robot equipped with a LIDAR and monocular and binocular cameras, and the pairwise calibration of the LIDAR with the two cameras is evaluated quantitatively and qualitatively. The results show that the method produces more accurate calibration results than state-of-the-art methods: the average error on the camera normalized plane is 0.161 mm, outperforming existing calibration approaches. Because graph optimization jointly refines the original point cloud while optimizing the extrinsic parameters between the sensors, the method can correct errors introduced during data collection and is therefore also robust to bad data.
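For concreteness, the sketch below shows one common way to pose LIDAR-to-camera extrinsic estimation as a least-squares problem over the camera's normalized image plane. It is not the authors' implementation: the paper's graph formulation also refines the point cloud itself, whereas this sketch optimizes only the six extrinsic parameters, and all function names, the correspondence source, and the choice of robust loss are illustrative assumptions.

```python
# Minimal sketch: estimate LIDAR-to-camera extrinsics (R, t) by minimizing
# residuals on the camera's normalized image plane. Assumes matched
# LIDAR/camera correspondences are already available (hypothetical inputs).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, pts_lidar, pts_norm):
    """Residuals between projected LIDAR points and observed
    normalized-plane coordinates (x/z, y/z)."""
    rotvec, t = params[:3], params[3:]
    R = Rotation.from_rotvec(rotvec).as_matrix()
    pc = pts_lidar @ R.T + t           # LIDAR points in the camera frame
    proj = pc[:, :2] / pc[:, 2:3]      # pinhole projection onto the normalized plane
    return (proj - pts_norm).ravel()


def calibrate(pts_lidar, pts_norm, init=np.zeros(6)):
    """pts_lidar: (N, 3) LIDAR points; pts_norm: (N, 2) matched
    normalized-plane observations. Returns (R, t)."""
    sol = least_squares(residuals, init, args=(pts_lidar, pts_norm),
                        loss="huber")  # robust loss tempers bad correspondences
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    return R, sol.x[3:]
```

The robust (Huber) loss here echoes, in a much simpler form, the robustness to bad data that the paper attributes to its graph optimization.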

Highlights

  • In recent years, with increasing demands on perception performance in mobile robotics [1–3], surveying and mapping [4–8], 3D reconstruction [9–12], and autonomous driving [13,14], multi-sensor fusion technology has become increasingly widespread [15–17]

  • To address the problem of extrinsic calibration between a LIDAR and multiple cameras, this paper proposes an automatic extrinsic calibration method based on graph optimization

  • Because the data collected by the two sensor types differ in kind, matching corresponding feature points is difficult; we therefore use LIDAR reflectivity information to locate the calibration board, construct a virtual calibration board, and use it to establish the initial value of the optimization problem (see the sketch after this list)

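As a rough illustration of the reflectivity-based board localization described in the last highlight, the following sketch thresholds LIDAR return intensity to isolate candidate calibration-board points and then fits a plane to reject outliers. The threshold value, the 2 cm plane tolerance, and the array layout are assumptions, not values from the paper.

```python
import numpy as np


def extract_board_points(points, reflectivity, thresh=0.8):
    """Keep LIDAR returns whose reflectivity exceeds a threshold, since
    high-reflectivity regions often correspond to the calibration pattern.
    points: (N, 3) LIDAR points; reflectivity: (N,) values in [0, 1].
    Assumes at least a few returns exceed the threshold."""
    board = points[reflectivity > thresh]
    # Fit a plane to the candidate points (least-squares via SVD)
    # and discard off-plane outliers.
    centroid = board.mean(axis=0)
    _, _, vt = np.linalg.svd(board - centroid)
    normal = vt[-1]                              # plane normal: smallest singular vector
    dist = np.abs((board - centroid) @ normal)   # point-to-plane distances
    return board[dist < 0.02]                    # keep points within 2 cm of the plane
```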

Introduction

With increasing demands on perception performance in mobile robotics [1–3], surveying and mapping [4–8], 3D reconstruction [9–12], and autonomous driving [13,14], multi-sensor fusion technology has become increasingly widespread [15–17]. LIDAR and cameras perform well in multi-sensor fusion systems [18–21]. Vision-based perception systems have been widely adopted in autonomous driving owing to their low cost and high performance [22–24]. However, a single-camera perception system cannot provide the accurate 3D information that autonomous driving requires [25], and pure-vision solutions incur high computational cost when recovering 3D structure and are seriously affected by occlusion, illumination instability, and object surface texture [26].
