Abstract

The integration of camera and LiDAR plays an important role in autonomous driving, for example in visual–LiDAR SLAM and 3D fusion-based environment perception, both of which rely on precise geometric extrinsic calibration. In this paper, we propose a fully automatic end-to-end method based on 3D–2D corresponding masks (CoMask) to directly estimate the extrinsic parameters with high precision. Simple background subtraction extracts the candidate point cluster from the complex scene, and the 3D LiDAR points lying on the checkerboard are then selected and refined by spatial-growth clustering. Once the distance transform of the 2D checkerboard mask is generated, the extrinsic calibration of the two sensors can be converted into a 3D–2D mask-alignment problem. A simple but efficient strategy combining a genetic algorithm with the Levenberg–Marquardt method solves the optimization globally without any initial estimates. Both simulated and real-world experiments show that the proposed method obtains accurate results without manual intervention, special environment setups, or prior initial parameters. Compared with the state of the art, our method has clear advantages in accuracy, robustness, and noise resistance. Our code is open-source on GitHub.
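To make the mask-alignment idea concrete, the sketch below shows one way such a cost could be written: candidate extrinsics project the checkerboard LiDAR points into the image, and the distance-transform map of the 2D checkerboard mask is sampled at the projected pixels. This is an illustrative sketch only; the function name, argument layout, and the toy Manhattan-distance map are assumptions, not the authors' implementation.

```python
import numpy as np

def mask_alignment_cost(points_3d, extrinsics, K, dist_map):
    """Sum of distance-transform values at the projections of the
    checkerboard LiDAR points. Lower cost means the projected cloud
    lies closer to the 2D checkerboard mask.
    (Hypothetical sketch, not the paper's actual code.)"""
    R, t = extrinsics                      # 3x3 rotation, 3-vector translation
    pts_cam = points_3d @ R.T + t          # LiDAR frame -> camera frame
    uv_h = pts_cam @ K.T                   # pinhole projection (homogeneous)
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    h, w = dist_map.shape                  # sample the distance map, clamped
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return float(dist_map[v, u].sum())

# Toy usage: a 64x64 "distance map" that is zero at pixel (32, 32).
ys, xs = np.mgrid[0:64, 0:64]
dist_map = (np.abs(xs - 32) + np.abs(ys - 32)).astype(float)
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0]])         # projects to (32, 32) at identity pose
c_aligned = mask_alignment_cost(pts, (np.eye(3), np.zeros(3)), K, dist_map)
c_shifted = mask_alignment_cost(pts, (np.eye(3), np.array([0.1, 0, 0])), K, dist_map)
```

A cost of this form has many local minima over the 6-DoF pose space, which is why the paper pairs a global search (genetic algorithm) with local refinement (Levenberg–Marquardt) rather than relying on an initial guess.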

Highlights

  • The camera sensor has the advantages of low cost, rich texture, and high frame rate for environment perception, but it is limited by lighting conditions and shows difficulties in recovering accurate geometric information

  • We focused on extrinsic calibration of camera and LiDAR in this paper

  • A real LiDAR point cloud can be corrupted by large noise in range and reflectivity due to factors such as changes in object reflectivity, vibration of the rotating mechanism, the noise characteristics of electronic components, and the time-of-flight (TOF) measurement error of the laser pulse echo


Summary

Introduction

The camera sensor has the advantages of low cost, rich texture, and high frame rate for environment perception, but it is limited by lighting conditions and has difficulty recovering accurate geometric information. One idea is to seek a set of 3D–3D or 3D–2D feature correspondences, which may involve points, line segments, or planar objects. Another is to exploit correlation between the image data and the laser point cloud, for example between luminosity and reflectivity, or between edges and range discontinuities. However, an optimization function based on maximizing such correlation has a large number of local maxima and can hardly converge to the correct result from a relatively rough initial estimate. It therefore cannot be applied when no prior knowledge is available, or in applications that require precise and robust calibration, such as dataset benchmark platforms and car manufacturing.
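The 3D–2D correspondence idea above is classically solved as a pose/projection estimation problem. As a minimal self-contained illustration (not the paper's method), the Direct Linear Transform below recovers a 3×4 projection matrix from at least six non-degenerate point correspondences; the function name and test data are assumptions for demonstration.

```python
import numpy as np

def dlt_projection(pts3d, pts2d):
    """Estimate a 3x4 projection matrix (up to scale) from 3D-2D point
    correspondences via the Direct Linear Transform. Requires >= 6
    non-degenerate pairs. Illustrative sketch of correspondence-based
    calibration, not the paper's CoMask method."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # Each correspondence contributes two linear equations in P's entries.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Usage: synthesize a ground-truth camera, project 8 points, and recover P.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.2], [1.0]])])
P_true = K @ Rt
X = rng.uniform(-1, 1, (8, 3)) + np.array([0, 0, 5])   # points in front of camera
Xh = np.hstack([X, np.ones((8, 1))])
x = (P_true @ Xh.T).T
uv = x[:, :2] / x[:, 2:3]
P_est = dlt_projection(X, uv)
x2 = (P_est @ Xh.T).T
uv_reproj = x2[:, :2] / x2[:, 2:3]
```

With noiseless correspondences the reprojection through the recovered matrix matches the original pixels; in practice such a linear solution is usually only the starting point for a nonlinear refinement.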

Related Works
System Overview
Automatic Checkerboard Detection
Three-Dimensional Checkerboard Point Cloud Extraction
Experiment
Simulated Experiment Setup
Results of the Simulated Experiment
Realistic Experiment Setup
Results of the Realistic Experiment
Realistic Results Vary with Observations
Analysis and Discussion
Conclusions

