Abstract

Visual simultaneous localization and mapping (SLAM) systems perform poorly in object tracking and map reconstruction because depth measurements derived from image-only data are unreliable. Light Detection and Ranging (LiDAR) can be coupled with the camera to overcome this uncertainty in depth estimation. The prerequisite for such data fusion is aligning the visual and LiDAR sensors to a common coordinate system by calibrating their extrinsic pose. Conventional extrinsic calibration frameworks rely either on markers placed on large artificial calibration boards or on uncontrollable natural scenes (Fig. 2), which limits their stability and convenience. In this paper, we design a novel marker pattern, A4LidarTag, composed of circular holes; the differences in depth measurement across the holes are used to encode location information. Based on A4LidarTag, we develop an automatic extrinsic calibration framework between a solid-state LiDAR (SSL) and a camera. The proposed framework operates at close range (within 1 m) with an A4-size calibration board. The average reprojection error of the projected LiDAR point cloud is about 0.12 pixels. Experiments show excellent efficiency and versatility in both indoor and outdoor scenes.
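
For context, LiDAR-camera extrinsic calibration quality is commonly evaluated by projecting LiDAR points into the image with the estimated rotation R, translation t, and camera intrinsics K, then measuring the pixel distance to corresponding image features. The sketch below illustrates that standard evaluation step only; it is not the paper's implementation, and the function names and NumPy-based pinhole projection are assumptions.

```python
import numpy as np

def project_lidar_points(points_lidar, R, t, K):
    """Project Nx3 LiDAR points into the image plane using the extrinsic pose
    (R, t) from the LiDAR frame to the camera frame and camera intrinsics K."""
    points_cam = (R @ points_lidar.T).T + t            # transform to camera frame
    in_front = points_cam[:, 2] > 0                    # keep points in front of the camera
    pixels_hom = (K @ points_cam[in_front].T).T        # homogeneous pixel coordinates
    pixels = pixels_hom[:, :2] / pixels_hom[:, 2:3]    # perspective division
    return pixels, in_front

def mean_reprojection_error(projected_px, reference_px):
    """Average Euclidean distance in pixels between projected LiDAR features
    and their corresponding detected image features."""
    return float(np.mean(np.linalg.norm(projected_px - reference_px, axis=1)))
```

In this hedged sketch, `reference_px` would hold the image-plane locations of the detected marker features (e.g., the circular-hole centers), matched to the projected LiDAR points; a lower mean error indicates a better extrinsic estimate.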
