Abstract

The objective of depth completion is to generate a dense depth map by upsampling a sparse one. However, irregular sparsity patterns and the lack of ground-truth data for unstructured scenes make depth completion extremely challenging. Sensor fusion of RGB and LIDAR data can produce a more reliable context with higher accuracy. Compared with previous approaches, this method takes semantic segmentation images as additional input and develops an unsupervised loss function; combined with a supervised depth loss, the depth completion problem is thus treated as semi-supervised learning. Instead of the traditional autoencoder approach, we used an adapted Wasserstein Generative Adversarial Network (WGAN) architecture together with a post-processing step that preserves the valid depth measurements received from the input and further enhances the precision of the predicted depth values. Our proposed method was evaluated on the KITTI depth completion benchmark, where its performance proved competitive.
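
A minimal PyTorch sketch of how such a semi-supervised objective can be assembled is given below: it supervises only the pixels that carry a ground-truth measurement, accepts an unsupervised term computed elsewhere, and adds the generator side of a WGAN objective. The function name, the L1 form, and the weights are illustrative assumptions rather than the paper's exact formulation.

    import torch

    def semi_supervised_loss(pred, sparse_gt, l_unsup, critic_score,
                             w_un=1.0, w_adv=0.01):
        # pred: dense depth prediction, shape (B, 1, H, W)
        # sparse_gt: semi-dense ground truth, 0 where no measurement exists
        # l_unsup: unsupervised loss term computed elsewhere (assumed given)
        # critic_score: WGAN critic output for the prediction, shape (B,)
        valid = (sparse_gt > 0).float()              # supervise only measured pixels
        l_sup = (valid * (pred - sparse_gt).abs()).sum() / valid.sum().clamp(min=1)
        l_adv = -critic_score.mean()                 # generator maximizes the critic score
        return l_sup + w_un * l_unsup + w_adv * l_adv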

Highlights

  • Dense depth maps are a fundamental feature for many vision functions installed in autonomous vehicles, such as 3D reconstruction, 3D object detection, and localization

  • In the first part, we provide an overview of our proposed method; we briefly discuss the difficulties involved in training the original generative adversarial network (GAN) and how the Wasserstein GAN (WGAN) compensates for them (a sketch of the WGAN critic update follows these highlights)

  • We argued that because the measurements in this region are closer to the sensor, the laser beams reflect back to the sensor more strongly and suffer less noise from the surrounding environment than in the remaining upper part of the input depth image, which is farther from the LIDAR sensor
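
As referenced in the second highlight, the standard GAN discriminator loss saturates when the real and generated depth distributions barely overlap, whereas the WGAN critic estimates the Wasserstein-1 distance and keeps gradients informative. The sketch below shows one critic update in the original weight-clipping formulation; the paper's adapted WGAN may use a different constraint (e.g., a gradient penalty), and the function and parameter names are assumptions.

    import torch

    def critic_step(critic, real_depth, fake_depth, opt_c, clip=0.01):
        # The critic maximizes E[f(real)] - E[f(fake)]; we minimize the negative.
        opt_c.zero_grad()
        loss = critic(fake_depth.detach()).mean() - critic(real_depth).mean()
        loss.backward()
        opt_c.step()
        for p in critic.parameters():    # weight clipping enforces the Lipschitz constraint
            p.data.clamp_(-clip, clip)
        return loss.item()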


Summary

Introduction

Dense depth maps are a fundamental feature for many vision functions installed in autonomous vehicles, such as 3D reconstruction, 3D object detection, and localization. Despite their high accuracy and long measurement range, the depth data collected by a LIDAR sensor are frequently of low resolution and unstructured. This occurs because certain regions of a target object do not reflect the laser back to the sensor; the depth data of these regions are invalid, and the regions appear as ‘‘holes’’ in the depth map. Depth completion aims to produce an accurate dense depth map from sparse input LIDAR data.
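
To make the sparsity concrete, the NumPy sketch below projects LIDAR points (assumed already transformed into the camera frame) onto the image plane; every pixel that no laser beam reaches stays zero, and these zeros are the ‘‘holes’’ that depth completion must fill. The function name and conventions are illustrative, not the paper's code.

    import numpy as np

    def lidar_to_sparse_depth(points_cam, K, h, w):
        # points_cam: (N, 3) points in camera coordinates; K: 3x3 intrinsics
        depth = np.zeros((h, w), dtype=np.float32)
        z = points_cam[:, 2]
        front = z > 0                                    # keep points in front of the camera
        uvz = (K @ points_cam[front].T).T                # pinhole projection
        u = np.round(uvz[:, 0] / uvz[:, 2]).astype(int)
        v = np.round(uvz[:, 1] / uvz[:, 2]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        depth[v[inside], u[inside]] = z[front][inside]   # unreached pixels remain 0 ('holes')
        return depth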

