Abstract

Depth completion, which incorporates additional sparse depth measurements from range sensors, substantially improves the accuracy of monocular depth estimation, especially for deep-learning-based methods. However, these methods rarely produce satisfactory depth results when the sensor configuration changes at test time, a situation that is common in real-world applications. In this paper, we tackle this problem with a novel two-stage mechanism that decomposes depth completion into two subtasks: relative depth map estimation and scale recovery. The relative depth map is first estimated from a single color image using our scale-invariant loss function. The scale map is then recovered from the additional sparse depth measurements. Experiments with different densities and patterns of sparse depth input show that our model consistently produces satisfactory depth results. Moreover, our approach achieves state-of-the-art performance on the indoor NYUv2 dataset and performs competitively on the outdoor KITTI dataset, demonstrating the effectiveness of our method.
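
The second stage of this decomposition can be illustrated with a minimal sketch. The snippet below assumes, purely for illustration, that scale recovery reduces to fitting a single global scale between the predicted relative depth and the sparse sensor samples in log space; the actual method described above recovers a per-pixel scale map, and the names used here (`recover_scale`, `relative_depth`, `sparse_depth`, `valid_mask`) are hypothetical.

```python
import numpy as np

def recover_scale(relative_depth, sparse_depth, valid_mask):
    """Fit a single global scale aligning a relative depth map to sparse
    metric measurements (robust fit in log space).

    relative_depth : (H, W) relative depth predicted from the color image
    sparse_depth   : (H, W) sparse metric depth from the range sensor
    valid_mask     : (H, W) boolean mask of pixels with a sparse sample
    """
    # Log-ratio between measured and predicted depth at the sparse locations.
    log_ratio = (np.log(sparse_depth[valid_mask])
                 - np.log(relative_depth[valid_mask]))
    # The median of the log-ratio gives a scale estimate that is robust
    # to outliers in the sparse measurements.
    return np.exp(np.median(log_ratio))

# Usage: convert the relative prediction into metric depth.
# metric_depth = recover_scale(rel, sparse, mask) * rel
```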
