Abstract

This work addresses the estimation of the extrinsic parameters (rotation and translation) between a Lidar and an RGB camera. We place a planar checkerboard inside the field of view (FOV) of both sensors and extract the 3D plane of the checkerboard from each sensor's data. The extracted plane coefficients are used to construct a well-structured set of 3D points, which are then aligned to yield the relative transformation between the two sensors. We estimate this transformation with our proposed Correntropy Similarity Matrix Iterative Closest Point (CoSMICP) algorithm. The method requires only a single point cloud frame from the Lidar and a single image from the calibrated camera. From the camera image, we recover 3D points from the projections of the calibration target's corner points and, in the process, compute the target's 3D plane equation. We evaluate our approach on a simulated dataset with complex environment settings, exploiting the flexibility of simulation to assess multiple sensor configurations, and the results verify our method across these configurations.
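The abstract does not detail CoSMICP itself, so as a rough illustration of the final alignment step only, the sketch below implements a generic correntropy-weighted rigid alignment (an iteratively reweighted Kabsch/SVD solve) in NumPy. Each point pair is weighted by a Gaussian (correntropy) kernel of its current residual so that outliers lose influence. The function name, the `sigma` and `iters` parameters, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def correntropy_weighted_alignment(src, dst, sigma=0.5, iters=30):
    """Estimate R, t aligning src to dst (both Nx3, correspondences known).

    Illustrative sketch, not the paper's CoSMICP: pairs are reweighted by a
    Gaussian (correntropy) kernel of their current residuals, then the
    weighted rigid transform is solved in closed form via SVD (Kabsch).
    """
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = np.exp(-resid**2 / (2.0 * sigma**2))   # correntropy weights
        w = w / w.sum()
        mu_s = w @ src                             # weighted centroids
        mu_d = w @ dst
        H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # reflection-safe rotation
        t = mu_d - R @ mu_s
    return R, t

# Synthetic check: recover a known rotation/translation from noisy points.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(100, 3))        # stand-in for the structured 3D points
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.3, -0.1, 0.5])
obs = pts @ R_true.T + t_true + rng.normal(scale=0.01, size=pts.shape)
R_est, t_est = correntropy_weighted_alignment(pts, obs)
print(np.allclose(R_est, R_true, atol=0.05), np.round(t_est, 2))
```

The closed-form weighted SVD solve is what makes this style of alignment robust: the Gaussian kernel downweights bad correspondences smoothly instead of requiring a hard outlier threshold.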
