Abstract

Depth estimation has achieved considerable success with the development of depth sensor devices and deep learning methods. However, depth estimation from a monocular RGB image alone is ambiguous and prone to error. In this paper, we present a novel approach that produces a dense depth map from a single image coupled with coarse point-cloud samples. Our approach learns to fit the distribution of the depth map from source data using conditional adversarial networks and converts the sparse point clouds into dense maps. Our experiments show that the conditional adversarial networks inject full-image information into the predicted depth maps, and demonstrate the effectiveness of our approach for depth prediction on the NYU-Depth-v2 indoor dataset.
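The paper does not provide implementation details in this summary, but the core idea of conditioning a GAN on the RGB image together with sparse depth samples can be illustrated with a minimal sketch. The PyTorch code below is a hypothetical illustration, not the authors' architecture: the encoder-decoder generator, the PatchGAN-style discriminator, the sparse-depth input convention (measured depth at sampled pixels, zeros elsewhere) and the L1 loss weight lambda_l1 are all assumptions.

```python
# Hypothetical sketch of conditional-GAN depth completion (not the authors' exact model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps RGB (3 ch) + sparse depth (1 ch, zeros where unsampled) to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # normalized depth in [0, 1]
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))

class Discriminator(nn.Module):
    """Scores (RGB, depth) pairs as real or generated, with per-patch logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch logits
        )

    def forward(self, rgb, depth):
        return self.net(torch.cat([rgb, depth], dim=1))

def train_step(G, D, opt_G, opt_D, rgb, sparse_depth, gt_depth, lambda_l1=100.0):
    """One training step: adversarial loss plus an L1 term toward the ground-truth dense depth."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Update the discriminator on real and generated depth maps.
    fake_depth = G(rgb, sparse_depth).detach()
    d_real, d_fake = D(rgb, gt_depth), D(rgb, fake_depth)
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Update the generator to fool the discriminator while staying close to ground truth.
    fake_depth = G(rgb, sparse_depth)
    d_fake = D(rgb, fake_depth)
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_depth, gt_depth)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

In this kind of setup, the discriminator sees the RGB image alongside the depth map, which is one way the full-image information mentioned in the abstract could be fed back into the predicted depth.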

Highlights

  • Depth estimation is a central problem in many industrial applications, such as simultaneous localization and mapping (SLAM), robotic systems, autonomous driving and augmented reality (AR)

  • We describe an approach based on a conditional Generative Adversarial Network (GAN) to reconstruct high-resolution depth maps despite the limitations of depth sensors

  • We introduce a novel method that estimates dense depth maps from monocular RGB images and coarse depth point clouds


Summary

Introduction

Depth estimation is a central problem in many industrial applications, such as simultaneous localization and mapping (SLAM), robotic systems, autonomous driving and augmented reality (AR). Thanks to the invention of depth sensors, including LIDAR, stereo cameras and time-of-flight based depth cameras, we have devices that directly measure the depth of the environment. Such depth sensors have their own drawbacks: the limited range, light sensitivity and high price for high depth accuracy of time-of-flight based depth cameras (e.g. Kinect v2), and the high cost and extremely low resolution of LIDAR. As for stereo cameras, careful calibration and a large amount of computation are required for precise estimation, which often fails under certain circumstances. Owing to such disadvantages, we describe an approach based on a conditional Generative Adversarial Network (GAN) to reconstruct high-resolution depth maps within the limitations of depth sensors.

