Abstract

Visual simultaneous localization and mapping (SLAM) systems perform poorly in object tracking and map reconstruction because depth measurements derived from image-only data are unreliable. Light Detection and Ranging (LiDAR) can be coupled with the camera to overcome this uncertainty in depth estimation. The prerequisite for such data fusion is extrinsic calibration, which aligns the visual and LiDAR sensors in a common coordinate system through the extrinsic pose. Conventional extrinsic calibration frameworks rely either on markers printed on large artificial calibration boards or on uncontrolled natural scenes, which limits their stability and convenience. In this paper, we design a novel marker pattern, A4LidarTag, composed of circular holes, in which differences in depth measurement encode location information. Based on A4LidarTag, we develop an automatic extrinsic calibration framework between a solid-state LiDAR (SSL) and a camera. The proposed framework operates at close range (within 1 m) with an A4-size calibration board. The average reprojection error of the projected LiDAR point clouds is about 0.12 pixels. Experiments show excellent efficiency and versatility in both indoor and outdoor scenes.
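To make the reported metric concrete, the sketch below illustrates how a reprojection error of this kind is typically computed: LiDAR points are transformed into the camera frame with the extrinsic pose, projected through the camera intrinsics, and compared against the detected 2-D marker centers. All numerical values, variable names, and correspondences here are illustrative placeholders, not the paper's actual calibration data or algorithm.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) for illustration only.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed extrinsic pose (LiDAR frame -> camera frame).
R = np.eye(3)                     # rotation
t = np.array([0.05, 0.0, 0.0])    # translation in metres

def project(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates via the extrinsic pose."""
    points_cam = points_lidar @ R.T + t       # LiDAR frame -> camera frame
    uv_hom = points_cam @ K.T                 # apply pinhole intrinsics
    return uv_hom[:, :2] / uv_hom[:, 2:3]     # perspective division

# Placeholder correspondences: 3-D hole centres (LiDAR) and their detected
# 2-D centres in the image.
pts_lidar = np.array([[ 0.10, 0.05, 0.80],
                      [-0.08, 0.02, 0.85]])
pts_image = np.array([[432.6, 277.4],
                      [298.7, 254.2]])

errors = np.linalg.norm(project(pts_lidar) - pts_image, axis=1)
print("mean reprojection error [px]:", errors.mean())
```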
