Abstract

Constructing a dense outdoor map of a large scene with a depth camera is difficult because of the sensor's limited range. Motivated by the state-of-the-art performance of data fusion for pixel-level distance estimation, this paper proposes a mapping system that employs depth images generated by an unsupervised LiDAR-stereo network to achieve accurate reconstruction. First, in ORB-SLAM2 stereo mode, the system estimates pose information to localize the camera. In a separate thread, LidarStereoNet generates a depth image within the camera's field of view to describe pixel-level distance. Then, to align the completed depth image with the ground truth, a nonlinear transformation between the generated depth and the rendered depth is performed to restore accurate distance relationships. Finally, the RGB image, the pose information, and the rendered depth image are fed into the dense mapping module to produce a large-scale point cloud model. Experiments on the KITTI dataset show that the system can construct a fairly comprehensive scene model after only one traversal, and that applying numerical truncation and filtering to extract usable areas makes the mapping results more accurate.
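
As a rough illustration of the final dense-mapping step summarized above, the sketch below back-projects a completed depth image into a colored world-frame point cloud using the camera pose from the SLAM thread. It is a minimal sketch, not the paper's implementation: the function name, the pinhole back-projection formulation, and the KITTI-like intrinsic values in the usage comment are assumptions for illustration.

```python
import numpy as np

def backproject_to_world(depth, rgb, pose, fx, fy, cx, cy):
    """Back-project a depth image into a colored world-frame point cloud.

    depth : (H, W) array of metric depths, 0 marking invalid pixels
    rgb   : (H, W, 3) color image aligned with the depth image
    pose  : (4, 4) camera-to-world transform from the SLAM front end
    Returns an (N, 6) array of [x, y, z, r, g, b] world points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0                               # crude filtering of empty pixels

    # Pinhole back-projection from pixel coordinates to camera frame.
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # homogeneous, (4, N)

    # Transform into the world frame with the estimated camera pose.
    pts_world = (pose @ pts_cam)[:3].T
    return np.hstack([pts_world, rgb[valid]])

# Hypothetical usage with KITTI-like intrinsics (assumed values):
# cloud = backproject_to_world(depth_img, rgb_img, T_cam_to_world,
#                              fx=718.86, fy=718.86, cx=607.19, cy=185.22)
```

In a full pipeline, depth values beyond a truncation threshold would be discarded and the accumulated cloud filtered, in the spirit of the truncation and filtering steps the abstract mentions.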
