Abstract

Autonomous vehicles must perceive their environment to make correct driving decisions. The sensors most commonly used by autonomous vehicles are the camera and the Light Detection and Ranging (LiDAR) sensor. In this work, we integrate LiDAR data with the image captured by the camera: color information is assigned to the point cloud, yielding a 3D model, and depth information is assigned to the image pixels, yielding a depth map. LiDAR data is sparse, and the resolution of the image is much greater than that of the LiDAR data. To match the two resolutions, we previously used Gaussian Process Regression (GPR) to interpolate the depth map, but it could not completely fill the empty locations. In this paper, we propose a method that interpolates the 2D depth map and completely fills its empty locations. In this study, we use a Velodyne VLP-16 LiDAR and a monocular camera. Our method is based on a covariance matrix: the depth value assigned to each empty location in the depth map is determined by the value of the covariance function in that matrix. Our method surpasses GPR in both run time and interpolation quality, showing that the approach is fast enough for real-time use in autonomous vehicles.
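The abstract describes filling empty depth-map locations by weighting known depths with a covariance function. A minimal sketch of that idea, assuming a squared-exponential covariance over pixel distance and a local search window (the function name, parameters, and window scheme are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def covariance_fill(depth, length_scale=2.0, radius=3):
    """Fill empty (zero) pixels of a sparse depth map by weighting nearby
    known depths with a squared-exponential covariance function.
    Illustrative sketch only; not the paper's exact method."""
    filled = depth.copy()
    rows, cols = depth.shape
    known = depth > 0  # assume zero marks an empty location
    for r in range(rows):
        for c in range(cols):
            if known[r, c]:
                continue  # keep measured LiDAR depths unchanged
            weights, values = [], []
            # gather known depths within the search window
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and known[rr, cc]:
                        d2 = dr * dr + dc * dc
                        # squared-exponential covariance in pixel distance
                        w = np.exp(-d2 / (2.0 * length_scale ** 2))
                        weights.append(w)
                        values.append(depth[rr, cc])
            if weights:
                # covariance-weighted average of neighboring depths
                filled[r, c] = np.dot(weights, values) / np.sum(weights)
    return filled
```

Unlike full GPR, this sketch avoids inverting a covariance matrix over all sample points, which is one plausible reason a direct covariance-weighted fill can run faster.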
