Abstract

Due to the difference in data modalities, finding feature correspondences between 2D and 3D data is a very challenging task in LiDAR-Camera calibration. In existing works, the establishment of cross-modal correspondences is usually simplified either by specifically designing artificial targets or by restricting the correspondence search region with the help of initial extrinsic parameters. To achieve automatic LiDAR-Camera calibration without prior knowledge, we propose a novel self-adaptive LiDAR-Camera calibration approach named ATOP, which realizes a cascaded ATtention-to-OPtimization procedure. In the attention stage, an attention-based object-level matching network called Cross-Modal Matching Network (CMON) is designed to find the overlapping FOV (Field of View) between the camera and the LiDAR and to produce 2D-3D object-level correspondences. In the optimization stage, two cascaded PSO-based (Particle Swarm Optimization) algorithms, namely Point-PSO and Pose-PSO, are designed to estimate the LiDAR-Camera extrinsic parameters. Unlike previous works, the proposed calibration method does not require any artificial targets or initial pose guesses, so it can be applied to online self-adaptive LiDAR-Camera calibration. Moreover, this is, to the best of our knowledge, the first work to achieve object-level matching between uncalibrated camera and LiDAR data. Experimental results on both our collected datasets and the KITTI dataset demonstrate the effectiveness of the proposed method.
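The abstract does not include code; the following is a minimal sketch of the optimization stage only, assuming that 2D-3D object-level correspondences (image object centers paired with LiDAR object centroids) have already been produced by a matching step such as CMON. It runs a single generic PSO over the 6-DoF extrinsics to minimize the reprojection error of the matched object centers; the paper's actual Point-PSO/Pose-PSO cascade may differ in its cost functions and staging. The function names (`pso_extrinsics`, `reprojection_cost`, `rodrigues`) and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_cost(pose, pts3d, pts2d, K_cam):
    """Mean pixel distance between projected LiDAR object centroids and
    matched image object centers.

    pose   : (rx, ry, rz, tx, ty, tz) axis-angle rotation + translation
    pts3d  : (N, 3) object centroids in the LiDAR frame
    pts2d  : (N, 2) object centers in pixel coordinates
    K_cam  : (3, 3) camera intrinsic matrix
    """
    R = rodrigues(pose[:3])
    t = pose[3:].reshape(3, 1)
    cam = R @ pts3d.T + t               # (3, N) points in the camera frame
    z = cam[2]
    if np.any(z <= 1e-6):               # points behind the camera: reject pose
        return 1e9
    uv = (K_cam @ (cam / z))[:2].T      # (N, 2) projected pixel coordinates
    return np.mean(np.linalg.norm(uv - pts2d, axis=1))

def pso_extrinsics(pts3d, pts2d, K_cam, n_particles=200, n_iters=300,
                   rot_bound=np.pi, trans_bound=5.0, seed=0):
    """Standard global-best PSO over the 6-DoF extrinsic parameters."""
    rng = np.random.default_rng(seed)
    lo = np.array([-rot_bound] * 3 + [-trans_bound] * 3)
    hi = -lo
    x = rng.uniform(lo, hi, size=(n_particles, 6))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([reprojection_cost(p, pts3d, pts2d, K_cam) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()          # global best position
    g_cost = pbest_cost.min()
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia / cognitive / social
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 6))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([reprojection_cost(p, pts3d, pts2d, K_cam) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        if cost.min() < g_cost:
            g, g_cost = x[np.argmin(cost)].copy(), cost.min()
    return g, g_cost
```

A call would look like `pose, err = pso_extrinsics(centroids_3d, centers_2d, K_cam)`, after which `rodrigues(pose[:3])` and `pose[3:]` give the estimated rotation and translation. Because PSO needs no initial guess and no gradients, it fits the paper's setting of calibration without prior extrinsic knowledge.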
