Abstract
3D LiDAR and 2D cameras are widely used in environment perception for autonomous driving and robot navigation. A prerequisite for multi-sensor data fusion is accurate calibration between the sensors. Most existing techniques for calibrating a 3D LiDAR to a 2D camera require substantial manual work and complex calibration environment setups, so improving the automation of calibration and its applicability to dynamic environments is a valuable research topic. In this paper, we propose a novel online calibration network (CALNet) to automatically infer the 6-degree-of-freedom (DoF) rigid-body transformation between a 3D LiDAR and a 2D camera. CALNet not only adopts an attention mechanism to selectively extract features from the RGB image and the point-cloud depth map, but also achieves high precision and robustness through a hybrid spatial pyramid pooling (HSPP) module and a liquid time-constant (LTC) network. During training, besides the L2 loss between the predicted and ground-truth extrinsic calibration parameters as the supervised signal, we also consider a geometric transformation distance loss on the 3D point cloud. Extensive experiments demonstrate that CALNet outperforms state-of-the-art deep-learning-based methods. The code will be publicly available at https://github.com/XD319328/CALNet.
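As a minimal sketch of the combined training objective described above, the snippet below assumes the network regresses the extrinsic as a rotation matrix and a translation vector, supervises them with an L2 term, and adds a geometric term measuring the distance between the point cloud transformed by the predicted and by the ground-truth extrinsics. The function names (`extrinsic_l2_loss`, `point_cloud_distance_loss`, `total_loss`) and the weight `alpha` are illustrative assumptions, not CALNet's actual implementation.

```python
import torch
import torch.nn.functional as F

def extrinsic_l2_loss(pred_rot, pred_trans, gt_rot, gt_trans):
    """L2 supervision on the predicted extrinsic calibration parameters."""
    return F.mse_loss(pred_rot, gt_rot) + F.mse_loss(pred_trans, gt_trans)

def point_cloud_distance_loss(points, pred_rot, pred_trans, gt_rot, gt_trans):
    """Mean distance between the point cloud transformed by the predicted
    extrinsics and by the ground-truth extrinsics.

    points: (N, 3) LiDAR points; rot: (3, 3); trans: (3,)
    """
    pred_pts = points @ pred_rot.T + pred_trans
    gt_pts = points @ gt_rot.T + gt_trans
    return (pred_pts - gt_pts).norm(dim=1).mean()

def total_loss(points, pred_rot, pred_trans, gt_rot, gt_trans, alpha=1.0):
    """Combined objective: parameter L2 loss + weighted geometric loss.
    The weighting scheme (alpha) is assumed, not specified in the abstract."""
    return extrinsic_l2_loss(pred_rot, pred_trans, gt_rot, gt_trans) + \
           alpha * point_cloud_distance_loss(points, pred_rot, pred_trans,
                                             gt_rot, gt_trans)
```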