Abstract

In this paper, we propose DeepDNet, a deep dense network for the depth completion task that generates a dense depth map from sparse depth measurements and the captured RGB view. A wide variety of scene understanding applications, such as 3D reconstruction, mixed reality, and robotics, demand accurate and dense depth maps. Existing depth sensors capture accurate and reliable sparse depth but struggle to acquire dense depth maps. We therefore use the accurate sparse depth, together with the RGB image, as input to generate dense depth. We model the transformation of randomly scattered sparse input into grid-based sparse input using quad-tree decomposition. We propose a Dense-Residual-Skip (DRS) autoencoder, with attention to edge preservation through a Gradient Aware Mean Squared Error (GAMSE) loss. We demonstrate our results on the NYUv2 dataset and compare them with other state-of-the-art methods. We also show results on sparse depth captured by the ARCore Depth API alongside its dense depth map. Extensive experiments show consistent improvements over existing methods.
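The abstract does not spell out the quad-tree transformation; as a rough, non-authoritative sketch (the names quadtree_grid and min_cell are hypothetical), the following shows one way randomly scattered sparse samples could be snapped onto a grid by recursive quadrant splitting:

import numpy as np

def quadtree_grid(depth, mask, min_cell=4):
    """Hypothetical sketch: recursively split the image into quadrants until
    each cell holds at most one valid sparse sample, then place that sample
    at the cell centre, yielding a grid-aligned sparse depth map."""
    h, w = depth.shape
    out = np.zeros_like(depth)

    def split(y0, x0, y1, x1):
        ys, xs = np.nonzero(mask[y0:y1, x0:x1])
        if len(ys) == 0:
            return  # no samples in this cell
        if len(ys) == 1 or (y1 - y0) <= min_cell or (x1 - x0) <= min_cell:
            # Snap the (first) sample in this cell to the cell centre
            cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
            out[cy, cx] = depth[y0 + ys[0], x0 + xs[0]]
            return
        my, mx = (y0 + y1) // 2, (x0 + x1) // 2
        split(y0, x0, my, mx)
        split(y0, mx, my, x1)
        split(my, x0, y1, mx)
        split(my, mx, y1, x1)

    split(0, 0, h, w)
    return out

Likewise, the GAMSE loss is only named here, not defined. A minimal PyTorch sketch, assuming it augments a standard MSE with a penalty on mismatched spatial depth gradients (the weight lam is a hypothetical hyperparameter):

import torch
import torch.nn.functional as F

def gamse_loss(pred, target, lam=1.0):
    """Sketch of a gradient-aware MSE on (B, 1, H, W) depth maps:
    plain MSE plus an MSE on finite-difference spatial gradients,
    which penalises blurred depth discontinuities at object edges."""
    mse = F.mse_loss(pred, target)

    def grads(d):
        # Finite differences along x (width) and y (height)
        gx = d[:, :, :, 1:] - d[:, :, :, :-1]
        gy = d[:, :, 1:, :] - d[:, :, :-1, :]
        return gx, gy

    pgx, pgy = grads(pred)
    tgx, tgy = grads(target)
    grad_term = F.mse_loss(pgx, tgx) + F.mse_loss(pgy, tgy)
    return mse + lam * grad_term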
