Abstract

Camera and LIDAR provide complementary information for robots to perceive the environment. In this paper, we present a system that fuses laser point clouds and visual information at the data level. Cameras and LIDARs mounted on an unmanned ground vehicle generally have different viewpoints, so some objects that are visible to the LIDAR may be invisible to the camera. This leads to false depth assignment in the visual image and incorrect colorization of laser points. The inputs to the system are a color image and the corresponding LIDAR data. The coordinates of the 3D laser points are first transformed into the camera coordinate system, and points outside the camera viewing volume are clipped. A new algorithm is proposed to reconstruct the underlying object surface of the potentially visible laser points as a quadrangle mesh, exploiting the scanning structure of the LIDAR as a prior. False edges are eliminated by constraining the angle between the laser scan trace and the radial direction at a given laser point, and quadrangles with inconsistent normals are pruned. In addition, missing laser points are filled in to avoid large holes in the reconstructed mesh. Finally, the z-buffer algorithm is used for occlusion reasoning. Experimental results show that our algorithm outperforms the previous approach: it assigns correct depth information to the visual image and provides the exact color to each laser point that is visible to the camera.
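
To make the pipeline concrete, the sketch below illustrates the first and last stages described above: transforming laser points into the camera frame, clipping and projecting them, and applying a per-pixel z-buffer for occlusion reasoning. It is a minimal illustration under assumed interfaces (NumPy arrays for the points and image, extrinsics R, t, and intrinsics K are hypothetical names), not the authors' implementation, and it omits the paper's central contribution, the quadrangle-mesh surface reconstruction and edge pruning that make the visibility test reliable when LIDAR and camera viewpoints differ.

```python
import numpy as np

def colorize_points(points_lidar, image, R, t, K):
    """Project LIDAR points into the camera and colorize the visible ones.

    points_lidar : (N, 3) points in the LIDAR frame
    image        : (H, W, 3) color image
    R, t         : LIDAR-to-camera rotation (3x3) and translation (3,)
    K            : camera intrinsic matrix (3x3)
    Returns colors for the in-view points and the per-pixel depth buffer.
    """
    h, w = image.shape[:2]

    # 1. Transform LIDAR points into the camera coordinate system.
    pts_cam = points_lidar @ R.T + t

    # 2. Clip points behind the camera (outside the viewing volume).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]

    # 3. Project onto the image plane with the pinhole model.
    proj = pts_cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[inside], v[inside], pts_cam[inside, 2]

    # 4. Per-pixel z-buffer: record the nearest depth at every pixel.
    zbuf = np.full((h, w), np.inf)
    np.minimum.at(zbuf, (v, u), z)

    # 5. Treat a point as visible only if it wins the z-buffer test;
    #    only visible points receive a color from the image.
    visible = z <= zbuf[v, u] + 1e-6
    colors = np.full((z.shape[0], 3), np.nan)
    colors[visible] = image[v[visible], u[visible]]
    return colors, zbuf
```

In this simplified form, a sparse depth map for the image can be read from the finite entries of the returned z-buffer; without the mesh reconstruction, however, background points that project into gaps between foreground samples may still be colorized incorrectly, which is the failure case the proposed algorithm addresses.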
