Abstract

With the increasing demand for reliable and accurate sensor information, the integration of multiple sensors has gained attention. In particular, the fusion of a LIDAR (Light Detection And Ranging) sensor and a camera is one of the most widely used sensor combinations because it provides complementary and redundant information. Many existing calibration approaches treat the problem as estimating the relative pose between a single sensor pair, such as one LIDAR and one camera. However, these approaches do not provide accurate solutions for multi-sensor configurations, such as one LIDAR with multiple cameras, or multiple LIDARs with multiple cameras. In this paper, we propose a new extrinsic calibration algorithm that uses closed-loop constraints for multi-modal sensor configurations. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. We conduct several experiments to evaluate the performance of our approach, including comparing the RMS distance between the ground-truth and projected points, and comparing calibration of independent sensor pairs against our approach.
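To make the reprojection-based objective concrete, the sketch below shows a minimal single-pair formulation: the extrinsic rotation and translation are estimated by minimizing the image-plane distance between projected LIDAR features and their corresponding image features. The function names, the axis-angle parameterization, and the synthetic data are assumptions for illustration only; the paper's method additionally couples several such sensor pairs through closed-loop constraints.

```python
# Minimal sketch (not the authors' code): estimate one LIDAR-camera extrinsic
# transform by minimizing reprojection distances of corresponding features.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_lidar, rvec, tvec, K):
    """Project 3-D LIDAR points into the image using extrinsics (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = points_lidar @ R.T + tvec      # transform into the camera frame
    uv = p_cam @ K.T                       # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]          # perspective division


def residuals(x, points_lidar, points_image, K):
    """Reprojection residuals for one LIDAR-camera pair."""
    rvec, tvec = x[:3], x[3:]
    return (project(points_lidar, rvec, tvec, K) - points_image).ravel()


# Toy, noise-free data standing in for matched calibration-target features.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
points_lidar = rng.uniform(-1.0, 1.0, (30, 3)) + np.array([0.0, 0.0, 5.0])
true_x = np.array([0.02, -0.01, 0.03, 0.10, -0.05, 0.20])  # assumed ground truth
points_image = project(points_lidar, true_x[:3], true_x[3:], K)

sol = least_squares(residuals, x0=np.zeros(6),
                    args=(points_lidar, points_image, K))
print("estimated extrinsics (rvec, tvec):", sol.x)
```

In a multi-sensor setup, the closed-loop constraint would additionally require the composed relative transforms around each sensor loop to equal the identity, which is what distinguishes the proposed approach from independent pairwise calibration.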
