Abstract

To address the problems of existing feature point calibration methods for 3D light detection and ranging (LiDAR) and camera calibration, namely the wide variety of calibration board designs, incomplete information extraction methods, and large calibration errors, a novel calibration board with local gradient depth information and main plane square corner information (BWDC) was designed. In addition, the "three-step fitting interpolation method" was proposed to select feature points and obtain their corresponding coordinates in the LiDAR coordinate system and the camera pixel coordinate system based on BWDC. Finally, calibration experiments were carried out, and the calibration results were verified by methods such as incremental verification and reprojection error comparison. The results show that using BWDC and the "three-step fitting interpolation method" yields an accurate coordinate transformation matrix and accurate intrinsic and extrinsic sensor parameters, which vary within 0.2% across repeated experiments. The difference between the experimental and actual values in the incremental verification experiments is about 0.5%. The average reprojection error is 1.8312 pixels and varies by no more than 0.1 pixels across different distances, which also shows that the calibration method is accurate and stable.
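As a rough illustration of the reprojection-error check used for verification, the Python sketch below projects LiDAR feature points into the image with estimated intrinsics and extrinsics and averages the pixel distance to the measured corner locations; the function name, variable layout, and the use of cv2.projectPoints are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import cv2

    def mean_reprojection_error(lidar_pts, pixel_pts, K, dist, rvec, tvec):
        """Average pixel distance between projected LiDAR feature points and
        their measured pixel coordinates (lower is better)."""
        # Project the (N, 3) LiDAR-frame points into the image plane using the
        # estimated rotation/translation and the camera intrinsics/distortion.
        projected, _ = cv2.projectPoints(lidar_pts, rvec, tvec, K, dist)
        projected = projected.reshape(-1, 2)
        # Euclidean pixel error per feature point, then the mean over all points.
        errors = np.linalg.norm(projected - pixel_pts, axis=1)
        return float(errors.mean())

Averaged over all feature points and board positions, an error of this kind corresponds to the roughly 1.8-pixel figure reported above.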

Highlights

  • Heterogeneous multi-sensor data fusion is widely researched and applied in mobile robots [1], driverless cars [2], and other fields

  • The "three-step fitting interpolation method" mainly comprises three steps: selecting feature points and fitting to obtain their coordinates in the light detection and ranging (LiDAR) coordinate system, calculating the coordinates of the feature points in the calibration board coordinate system, and interpolating the feature points in the camera pixel coordinate system

  • The following two placement conditions should be satisfied: (a) if the horizontal angular resolution of the LiDAR is θ, the calibration board should be placed at a distance d that satisfies d ≤ L/(3θ); (b) theoretical analysis and experimental verification show that corner point detection is more robust when each checkerboard square is imaged with a side length of at least three pixels, as illustrated in the sketch below
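The Python sketch below is a worked check of these two placement conditions; L is assumed here to be the side length of the board feature that must be crossed by at least three LiDAR returns, and the numeric values (0.1° resolution, 1200-pixel focal length, 5 cm squares, 0.2 m feature) are illustrative assumptions rather than the paper's settings.

    import math

    def placement_ok(d, L, theta_rad, f_px, square_side):
        """d: board distance (m); L: board feature side length (m);
        theta_rad: LiDAR horizontal angular resolution (rad);
        f_px: camera focal length (pixels); square_side: checkerboard square side (m)."""
        # (a) Adjacent LiDAR returns are about d*theta apart, so d <= L/(3*theta)
        #     guarantees roughly three or more returns across a feature of length L.
        cond_a = d <= L / (3.0 * theta_rad)
        # (b) Under a pinhole model a square of side s at distance d spans about
        #     f_px * s / d pixels; require at least three pixels.
        cond_b = f_px * square_side / d >= 3.0
        return cond_a and cond_b

    # Example with assumed values: board at 5 m from the sensors.
    print(placement_ok(d=5.0, L=0.2, theta_rad=math.radians(0.1),
                       f_px=1200.0, square_side=0.05))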


Summary

Introduction

Heterogeneous multi-sensor data fusion is widely researched and applied in mobile robots [1], driverless cars [2], and other fields. The feature point method directly obtains the transformation relationship between the LiDAR coordinate system and the camera pixel coordinate system: feature points are selected on a calibration board, their coordinates in the two coordinate systems are obtained, and the transformation matrix together with the intrinsic and extrinsic parameters [15] is then calculated by solving the calibration matrix conversion equation [16] or by methods such as supervised learning [17]. The calibration board designed in this paper (BWDC) provides gradient depth information, main plane corner information, and position information, and is easy to set up and adjust in experiments. Since another factor that affects calibration accuracy is the calibration method itself, this paper also proposes the "three-step fitting interpolation method" for selecting feature points and accurately obtaining their coordinates in the LiDAR coordinate system and the camera pixel coordinate system.
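To make the transformation step concrete, the Python/OpenCV sketch below solves for the LiDAR-to-camera extrinsics from matched 3D-2D feature points via a perspective-n-point solver, assuming the camera intrinsic matrix K and distortion coefficients dist are already known; the function name and the choice of cv2.solvePnP are assumptions for illustration, and the paper's own solver (which also estimates intrinsic parameters) may differ.

    import numpy as np
    import cv2

    def solve_extrinsics(lidar_pts, pixel_pts, K, dist):
        """lidar_pts: (N, 3) feature-point coordinates in the LiDAR frame;
        pixel_pts: (N, 2) matching pixel coordinates; returns the 4x4
        homogeneous transform from the LiDAR frame to the camera frame."""
        ok, rvec, tvec = cv2.solvePnP(lidar_pts.astype(np.float64),
                                      pixel_pts.astype(np.float64), K, dist)
        if not ok:
            raise RuntimeError("PnP solution failed")
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = tvec.ravel()
        return T

Applying the intrinsic matrix to the transformed points then yields the predicted pixel coordinates used, for example, in the reprojection-error evaluation.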

Basic Method of 3D LiDAR and Camera Feature Point Calibration
Calibration
Calibration Method
Feature Points Selection and LiDAR Coordinates Calculation
Calibration Process
Calibration Conditions
Experiments and Analysis
Calibration Experiments
Experiments
Incremental Verification Experiments
Translation Incremental Verification
Rotation Increment Verification
Reprojection Error Evaluation
Conclusions and Future Work
Findings
Patents