The data fusion of a 3-D light detection and ranging (LIDAR) point cloud and a camera image during the creation of a 3-D map is important because it enables more efficient object classification by autonomous mobile robots and facilitates the construction of fine 3-D models. At the core of data fusion is the accurate estimation of the external parameters between the LIDAR and the camera through extrinsic calibration. Although several studies have proposed the use of multiple calibration targets or poses for precise extrinsic calibration, no study has clearly defined the relationship between the target positions and the data-fusion accuracy. Here, we rigorously investigated how the deployment of calibration targets affects data fusion and identified the key factors to consider when deploying targets for extrinsic calibration. We then applied a probabilistic method to perform a global and robust sampling of the camera's external parameters. Subsequently, we proposed an evaluation method for the estimated parameters that utilizes the color ratio of the 3-D colored point cloud map. The derived probability density confirmed that the proposed deployment method performs well in estimating the camera's external parameters. Additionally, the evaluation quantitatively confirmed that our deployments of the calibration targets achieve higher-accuracy data fusion than previous methods.
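To make the fusion step concrete, the following is a minimal sketch (not the paper's implementation) of how an estimated extrinsic pair (R, t) and a known intrinsic matrix K are typically used to colorize a LIDAR point cloud from a camera image under a pinhole camera model; the function name, array layouts, and the absence of lens-distortion handling are illustrative assumptions.

```python
import numpy as np

def fuse_lidar_camera(points, image, K, R, t):
    """Project 3-D LIDAR points into a camera image and colorize them.

    points: (N, 3) LIDAR points in the LIDAR frame.
    image:  (H, W, 3) RGB image.
    K:      (3, 3) camera intrinsic matrix.
    R, t:   extrinsic rotation (3, 3) and translation (3,) mapping
            LIDAR coordinates into the camera frame.
    Returns an (M, 6) array of [x, y, z, r, g, b] for points that
    project inside the image.
    """
    # Transform points from the LIDAR frame into the camera frame.
    cam_pts = points @ R.T + t

    # Keep only points in front of the camera.
    front = cam_pts[:, 2] > 0
    cam_pts, kept = cam_pts[front], points[front]

    # Pinhole projection onto the image plane.
    uvw = cam_pts @ K.T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    # Discard points that fall outside the image bounds.
    h, w = image.shape[:2]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, kept = u[inside].astype(int), v[inside].astype(int), kept[inside]

    # Attach the sampled pixel color to each surviving point.
    colors = image[v, u]
    return np.hstack([kept, colors])
```

Because every colored point passes through (R, t), even small extrinsic errors misalign colors with geometry, which is why the placement of calibration targets during extrinsic calibration directly governs the achievable data-fusion accuracy.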