Abstract

Monocular depth estimation is highly challenging for complex compositions that depict multiple objects of diverse scales. Despite the recent progress enabled by deep convolutional neural networks, state-of-the-art monocular depth estimation methods still fall short in such challenging real-world scenarios. In this paper, we propose a deep end-to-end learning framework to tackle these challenges, which learns a direct mapping from a color image to the corresponding depth map. First, we formulate monocular depth estimation as a multi-category dense labeling task, in contrast to the common regression-based formulation; in this way, we can build on recent progress in dense labeling tasks such as semantic segmentation. Second, we fuse the side-outputs of our front-end dilated convolutional neural network hierarchically to exploit multi-scale depth cues, which is critical for scale-aware depth estimation. Third, we propose soft-weighted-sum inference in place of hard-max inference, transforming discretized depth scores into continuous depth values; this reduces the influence of quantization error and improves the robustness of our method. Extensive experiments have been conducted on the Make3D, NYU v2, and KITTI datasets, and our method achieves superior performance on NYU v2 and KITTI compared with current state-of-the-art methods. Furthermore, experiments on the NYU v2 dataset reveal that our classification-based model is able to learn the probability distribution of depth.
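To make the third contribution concrete, the sketch below shows one way soft-weighted-sum inference can turn per-pixel classification scores over discretized depth bins into a continuous depth value, rather than snapping to the highest-scoring bin. The bin layout (80 bins, log-spaced over 0.5 m to 10 m), the softmax normalization, and all names here are illustrative assumptions; the abstract does not specify these details.

    import numpy as np

    def soft_weighted_sum(scores, bin_centers):
        """Per-pixel soft-weighted-sum inference over depth bins.

        scores      : (num_bins, H, W) raw network outputs (logits), one per bin
        bin_centers : (num_bins,) representative depth of each bin (assumed
                      log-spaced here; the abstract does not fix this choice)
        returns     : (H, W) continuous depth map
        """
        # Softmax over the bin dimension turns scores into a per-pixel
        # probability distribution over the discretized depth range.
        scores = scores - scores.max(axis=0, keepdims=True)  # numerical stability
        probs = np.exp(scores)
        probs /= probs.sum(axis=0, keepdims=True)

        # Soft-weighted-sum: the expectation of depth under this distribution.
        # Hard-max (argmax) would instead pick a single bin center and incur
        # quantization error equal to up to half the bin width.
        return np.tensordot(bin_centers, probs, axes=1)

    # Hypothetical discretization: 80 bins, uniform in log-depth over [0.5 m, 10 m]
    bin_centers = np.exp(np.linspace(np.log(0.5), np.log(10.0), 80))
    logits = np.random.randn(80, 48, 64)  # stand-in for a network's side-output scores
    depth = soft_weighted_sum(logits, bin_centers)  # (48, 64) continuous depth map

Because the weighted sum averages over neighboring bins, its output varies smoothly as the predicted distribution shifts, which is what lets a classification model produce continuous depth values.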
