Abstract

Constrained by mounting-height requirements and the need to avoid occlusion in adjacent areas, configurations of multiple light detection and ranging (LiDAR) sensors, rather than a single omni-directional LiDAR, have gradually become the first choice for low-cost, mass-produced unmanned ground vehicles. However, accurate calibration of multiple LiDAR units is challenging, since conjugate features in the scan points cannot always provide enough constraints, a problem exacerbated by narrow overlapping fields of view. To address this problem, we propose a novel automatic extrinsic calibration method for multi-beam LiDARs that requires no special targets. Specifically, we decompose the extrinsic parameter estimation into two parts. The first extracts the ground plane equation for each LiDAR and builds a loss function on the corresponding ground normals to estimate the z, roll, and pitch offsets relative to the ground. The second converts the 3D horizontally adjusted points into a 2D occupancy grid map (OGM) and its distance transform (DT); by projecting the overlapping points into the DT field, an energy loss function is constructed to obtain the x, y, and yaw offsets between the LiDARs. Both simulated and real-world experiments show that our method completes the extrinsic calibration automatically, without manual intervention or specific objects. Compared with existing methods, ours has clear advantages in accuracy, noise resistance, and robustness to initial parameters. Our code is open sourced on GitHub.
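The first stage described above, estimating z, roll, and pitch from the ground plane, can be illustrated with a minimal sketch. The function names (`fit_ground_plane`, `ground_offsets`) and the SVD-based plane fit are illustrative assumptions, not the paper's actual implementation, and the small-angle decomposition of the normal into roll and pitch is one common convention:

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane n.p + d = 0 to an (N, 3) array of ground points.
    The normal is the singular vector with the smallest singular value
    of the mean-centered points (least-squares plane fit)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal upward (+z)
        normal = -normal
    d = -normal @ centroid
    return normal, d

def ground_offsets(normal, d):
    """Recover the roll and pitch (rad) that level the fitted plane,
    and the sensor height z above it, assuming roll is a rotation
    about x and pitch a rotation about y (hypothetical convention)."""
    nx, ny, nz = normal
    roll = np.arctan2(ny, nz)
    pitch = -np.arctan2(nx, nz)
    z = -d / nz                # signed height of the sensor origin
    return roll, pitch, z
```

A per-LiDAR fit like this yields one (roll, pitch, z) triple per sensor; the paper's loss over the ground normals would then reconcile these estimates across units before the 2D OGM/DT stage resolves x, y, and yaw.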
