Abstract

2D and 3D sensor extrinsic calibration is a key prerequisite for multi-sensor robot perception and localization. However, such calibration is challenging due to the variety of sensor modalities and the need for special calibration targets and human intervention. In this paper, we demonstrate a new targetless cross-modal calibration system focusing on the extrinsic transformations among stereo cameras, thermal cameras, and laser sensors. Specifically, the calibration between the stereo camera and the laser is conducted in 3D space by minimizing the registration error, while the thermal camera's extrinsics to the other two sensors are estimated by optimizing the alignment of edge features. Due to the low contrast of thermal images, the extracted edges are often noisy, resulting in incorrect edge matching. We introduce an edge-alignment optimization on an attraction field map to overcome this challenge. Our method requires no dedicated targets and performs the multi-sensor calibration in a single shot without human interaction. Extensive experiments on our collected real-world datasets show that our system can be easily used in structured environments with high extrinsic calibration accuracy. The video demonstration can be found at https://www.youtube.com/watch?v=b1QAVY7hUu8. The code is released to the public at https://github.com/FuTaimeng/auto_calibration.
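To illustrate the idea behind edge alignment on an attraction field map: for every pixel, the field stores the offset to (and distance from) the nearest edge pixel, so an optimizer can score a candidate extrinsic by sampling the field at projected edge locations instead of relying on brittle one-to-one edge matching. The sketch below is a minimal, brute-force NumPy construction for a toy image; the function name and approach are illustrative assumptions, not the paper's actual implementation (which would likely use an efficient distance transform on real edge maps).

```python
import numpy as np

def attraction_field(edge_map):
    """For each pixel of a binary edge map, compute the distance to the
    nearest edge pixel and the 2D offset vector (dy, dx) pointing at it.
    Hypothetical helper for illustration; brute-force O(H*W*E) search."""
    ey, ex = np.nonzero(edge_map)                  # edge pixel coordinates
    yy, xx = np.indices(edge_map.shape)            # per-pixel coordinates
    # pairwise squared distances from every pixel to every edge pixel: (H, W, E)
    d2 = (yy[..., None] - ey) ** 2 + (xx[..., None] - ex) ** 2
    nearest = d2.argmin(axis=-1)                   # index of closest edge pixel
    dist = np.sqrt(d2.min(axis=-1))                # distance transform
    # offset vectors toward the nearest edge pixel, shape (2, H, W)
    field = np.stack([ey[nearest] - yy, ex[nearest] - xx], axis=0)
    return dist, field

# toy example: a single vertical edge at column 3 of a 5x7 image
edges = np.zeros((5, 7), dtype=bool)
edges[:, 3] = True
dist, field = attraction_field(edges)
```

Because the field varies smoothly away from the edges, its sampled distances give a well-behaved objective even when the noisy thermal edges do not match the reference edges pixel-for-pixel.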
