Abstract

The combination of a multi-layer Light Detection and Ranging (LiDAR) sensor and a camera is commonly used in autonomous perception systems. The complementary information from these sensors is instrumental for reliable perception of the surroundings. However, obtaining the extrinsic parameters between the LiDAR and the camera, which many perception algorithms require, is a difficult task. In this study, we present a method that uses only three 3D-2D correspondences to compute the extrinsic parameters between a Velodyne VLP-16 LiDAR and a monocular camera. 3D and 2D features are extracted from the point cloud and the image of a custom calibration target, respectively, and the extrinsic parameters are then obtained from these features with the perspective-three-point (P3P) algorithm. Outliers with minimum energy at the geometric discontinuities of the target serve as control points for extracting the key features in the LiDAR point cloud. Moreover, a novel method is presented to distinguish the correct solution from the multiple P3P solutions, based on discrepancies in conic shapes in the spaces of the different solutions.
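As a rough illustration of the P3P step described above, the following minimal Python sketch uses OpenCV's `cv2.solveP3P` on three hypothetical LiDAR-camera correspondences. The point coordinates, the camera matrix `K`, and the distortion vector are assumptions for illustration, not values from the paper, and the paper's conic-based disambiguation of the candidate poses is not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical data: three 3D points (metres, LiDAR frame) extracted from the
# calibration target, and their corresponding 2D pixel locations in the image.
object_points = np.array([[1.20,  0.35, -0.10],
                          [1.25, -0.40, -0.05],
                          [1.22,  0.00,  0.45]], dtype=np.float64)
image_points = np.array([[410.0, 260.0],
                         [225.0, 248.0],
                         [318.0, 120.0]], dtype=np.float64)

# Assumed pinhole intrinsics and an undistorted image.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(4)

# P3P yields up to four candidate poses; a separate check (in the paper, the
# conic-shape discrepancy test) is needed to select the correct one.
n_solutions, rvecs, tvecs = cv2.solveP3P(
    object_points, image_points, K, dist, flags=cv2.SOLVEPNP_P3P)

for i in range(n_solutions):
    R, _ = cv2.Rodrigues(rvecs[i])  # rotation from LiDAR frame to camera frame
    print(f"candidate {i}: R=\n{R}\nt={tvecs[i].ravel()}")
```

With exactly three correspondences the pose is not unique, which is why the paper's disambiguation step among the candidate solutions is essential.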
