Abstract

3D laser scanning with LiDAR sensors plays an important role in how mobile robots understand their surroundings. However, not all sensing setups provide high-resolution, accurate measurements, owing to hardware limitations, weather conditions, and other factors. Generative modeling of LiDAR data as a scene prior is one promising way to compensate for unreliable or incomplete observations. In this paper, we propose a novel generative model for learning LiDAR data based on generative adversarial networks. As in related studies, we process LiDAR data in a compact yet lossless representation, a cylindrical depth map. However, despite the smoothness of real-world objects, many points on the depth map are dropped during laser measurement, which makes generative modeling difficult. To circumvent this issue, we introduce measurement uncertainty into the generation process, which allows the model to learn a disentangled representation of the underlying shape and the dropout noise from a collection of real LiDAR data. To simulate the lossy measurement, we adopt a differentiable sampling framework that drops points based on the learned uncertainty. We demonstrate the effectiveness of our method on synthesis and reconstruction tasks using two datasets. We further showcase potential applications by restoring LiDAR data with various types of corruption.
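
To make the differentiable point-dropping step concrete, the sketch below illustrates one common way such a sampling layer can be built in PyTorch: a relaxed-Bernoulli (Gumbel-Sigmoid) sample with a straight-through estimator, driven by per-pixel uncertainty logits. This is a minimal illustration of the general technique under assumed tensor shapes and names (`sample_dropout_mask`, `logits`, `tau` are hypothetical), not the authors' exact implementation.

```python
import torch

def sample_dropout_mask(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Draw a (nearly) binary keep/drop mask in a differentiable way.

    `logits` are per-pixel uncertainty scores predicted by the generator
    (a hypothetical output; the actual network design is not specified here).
    Uses a relaxed-Bernoulli (Gumbel-Sigmoid) sample with a straight-through
    estimator so gradients can flow back to the logits.
    """
    # Logistic noise for the relaxed-Bernoulli reparameterization
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)
    soft_mask = torch.sigmoid((logits + noise) / tau)

    # Straight-through: hard 0/1 mask in the forward pass,
    # soft gradients in the backward pass.
    hard_mask = (soft_mask > 0.5).float()
    return hard_mask + soft_mask - soft_mask.detach()


# Example usage: corrupt a generated cylindrical depth map with learned dropout.
depth = torch.rand(1, 1, 64, 256)                          # complete depth map from a generator
logits = torch.randn(1, 1, 64, 256, requires_grad=True)    # predicted per-pixel uncertainty
mask = sample_dropout_mask(logits, tau=0.5)
noisy_depth = depth * mask                                 # dropped points become zero, as in raw scans
```

Because the mask is binary in the forward pass but differentiable in the backward pass, the dropout pattern itself can be learned jointly with the shape generator from real scans.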
