Abstract

With the extensive research on and application of unmanned systems such as unmanned aerial vehicles and self-driving cars, people are increasingly aware of the importance of multi-sensor data fusion. In this paper, a novel calibration method and a corresponding experimental setup are proposed to accurately estimate the extrinsic parameters between a LiDAR and a camera. The proposed method uses reflection intensity to extract LiDAR line features on a self-developed calibration board. Intersections in the LiDAR point cloud are computed from the extracted LiDAR line features, and intersections of visual lines are derived from ArUco markers. Multiple intersection correspondences between the LiDAR frame and the camera frame are thus obtained and used to compute the extrinsic parameters. We validated the method through experiments using a binocular camera with known intrinsic parameters. The results show that the extrinsic calibration errors are within 0.7\(^{\circ }\) in rotation and within 1 cm in translation. Compared with the state-of-the-art method, accuracy and convergence improve by roughly a factor of three.

Keywords: Extrinsic calibration · LiDAR · Camera · Reflection intensity
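Once the intersection correspondences between the LiDAR frame and the camera frame are established, the extrinsic rotation and translation can be recovered from 3D-3D point pairs. Below is a minimal illustrative sketch of one standard way to do this, the Kabsch/SVD rigid alignment; this is an assumption about the solver stage, not the paper's full pipeline, which also covers intensity-based line extraction and ArUco marker detection. The function name `estimate_extrinsics` is hypothetical.

```python
import numpy as np

def estimate_extrinsics(lidar_pts, cam_pts):
    """Estimate rotation R and translation t mapping LiDAR-frame points to
    camera-frame points from 3D-3D correspondences (Kabsch/SVD method).

    Illustrative sketch only: assumes correspondences are already matched
    intersection points, one row per point, shape (N, 3), N >= 3.
    """
    lidar_pts = np.asarray(lidar_pts, dtype=float)
    cam_pts = np.asarray(cam_pts, dtype=float)

    # Center both point sets on their centroids.
    mu_l = lidar_pts.mean(axis=0)
    mu_c = cam_pts.mean(axis=0)

    # Cross-covariance matrix of the centered correspondences.
    H = (lidar_pts - mu_l).T @ (cam_pts - mu_c)

    # SVD-based rotation; the diagonal correction guards against a
    # reflection (det = -1) being returned instead of a proper rotation.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Translation maps the LiDAR centroid onto the camera centroid.
    t = mu_c - R @ mu_l
    return R, t
```

With exact correspondences this recovers the ground-truth transform; with noisy intersections it yields the least-squares rigid fit, which is why multiple correspondences improve accuracy.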
