Abstract

Mapping systems that turn sensor data into a model of the environment are standard components in mobile robotics. Outdoor robots are often equipped with 3D LiDAR sensors to obtain accurate range measurements at a high frame rate. The price of a robotic LiDAR sensor scales roughly linearly with the number of beams and thus with the vertical resolution of the scanner. In general, the cheaper the sensor, the sparser the point cloud. In this letter, we address the problem of building dense models from sparse range data. Instead of requiring the vehicle to move slowly through the environment or to traverse the scene multiple times to cover the space densely, we investigate geometric scan completion through a learning-based approach. We revisit the traditional volumetric fusion pipeline based on truncated signed distance fields (TSDF) and propose a neural network to aid the 3D reconstruction on a frame-to-frame basis by completing each scan towards a dense TSDF volume. We propose a geometric scan completion network that is trained in a self-supervised fashion without labels. Our experiments illustrate that such frame-wise completion leads to maps that are on par with or even better than maps generated using a higher-resolution LiDAR sensor. We additionally show that our system can be used to improve the performance of SLAM systems.
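The abstract builds on classical TSDF volumetric fusion, where each scan's range measurements are integrated into a voxel grid holding a truncated signed distance to the nearest surface. The sketch below illustrates that standard integration step (in the style of Curless and Levoy), not the paper's own implementation; the function name, grid layout, and parameters are illustrative assumptions.

```python
import numpy as np

def integrate_points(tsdf, weights, points, sensor_origin, voxel_size, trunc):
    """Fuse one scan (points in the world frame) into a TSDF voxel grid.

    tsdf, weights : (X, Y, Z) float arrays; tsdf initialized to 1.0, weights to 0.
    points        : (N, 3) measured surface points.
    sensor_origin : (3,) sensor position in the world frame.
    trunc         : truncation distance of the signed distance field.
    """
    dims = np.array(tsdf.shape)
    for p in points:
        ray = p - sensor_origin
        depth = np.linalg.norm(ray)
        direction = ray / depth
        # Visit voxels along the ray inside the truncation band around the hit.
        for t in np.arange(depth - trunc, depth + trunc, voxel_size):
            voxel = np.floor((sensor_origin + t * direction) / voxel_size).astype(int)
            if np.any(voxel < 0) or np.any(voxel >= dims):
                continue
            # Signed distance to the measured surface, truncated to [-1, 1].
            sdf = np.clip((depth - t) / trunc, -1.0, 1.0)
            i, j, k = voxel
            w = weights[i, j, k]
            # Weighted running average of all observations of this voxel.
            tsdf[i, j, k] = (w * tsdf[i, j, k] + sdf) / (w + 1.0)
            weights[i, j, k] = w + 1.0
    return tsdf, weights
```

With a sparse scanner, few voxels receive updates per frame; the letter's contribution is to densify each frame's TSDF before this fusion step, so the same pipeline yields denser maps.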
