Dense reconstruction has been studied for decades in computer vision and robotics, where LiDAR and cameras are the most widely used sensors. However, vision-based methods are sensitive to illumination variation and provide no direct depth measurements, while LiDAR-based methods suffer from sparse measurements and a lack of color and texture information. In this paper, we propose a novel dense 3D reconstruction algorithm that combines LiDAR with a monocular camera. A LiDAR odometry module estimates accurate poses, which are then used to compute and fuse depth maps before mesh generation and texture mapping. In addition, a semantic segmentation network and a depth completion network are employed to obtain dense and accurate depth maps. The concept of symmetry is exploited to generate 3D models of objects or scenes; that is, the reconstruction and the camera imaging of these objects or scenes are treated as symmetric processes. Experimental results on a public dataset show that the proposed algorithm achieves higher accuracy, efficiency, and completeness than existing methods.
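The pipeline described above (odometry for poses, depth completion per frame, then depth-map fusion) can be sketched in outline. This is a minimal illustrative skeleton, not the paper's implementation: `lidar_odometry`, `complete_depth`, and `fuse_depth_maps` are hypothetical stand-ins (identity poses, mean-fill completion in place of the learned depth-completion network, and per-pixel averaging in place of the paper's fusion), intended only to show how the stages connect.

```python
import numpy as np

def lidar_odometry(scans):
    # Stand-in: return identity poses. A real odometry module would
    # register consecutive LiDAR scans (e.g. via ICP) to estimate motion.
    return [np.eye(4) for _ in scans]

def complete_depth(sparse_depth):
    # Toy "depth completion": fill pixels without a LiDAR return (value 0)
    # with the mean of the valid sparse measurements. The paper uses a
    # learned depth-completion network instead.
    valid = sparse_depth > 0
    dense = sparse_depth.copy()
    dense[~valid] = sparse_depth[valid].mean()
    return dense

def fuse_depth_maps(depth_maps):
    # Naive fusion: per-pixel average across frames. A real system would
    # warp each map into a common frame using the estimated poses first.
    return np.mean(np.stack(depth_maps), axis=0)

def reconstruct(scans, sparse_depths):
    poses = lidar_odometry(scans)                      # step 1: poses
    dense = [complete_depth(d) for d in sparse_depths] # step 2: densify
    fused = fuse_depth_maps(dense)                     # step 3: fuse
    return poses, fused                                # meshing/texturing would follow
```

With toy 4x4 sparse depth maps, `reconstruct` returns one pose per scan and a fused dense depth map of the same resolution; meshing and texture mapping would consume that fused geometry.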