Abstract

Monocular depth estimation is of vital importance in understanding the 3D geometry of a scene. However, inferring depth from a single image is ill-posed and inherently ambiguous. In this study, two improvements to existing approaches are proposed. The first is an improved network architecture: the authors extend the Densely Connected Convolutional Network (DenseNet) into an end-to-end, fully convolutional, multi-scale dense network. Dense upsampling blocks are integrated to increase the output resolution, and selected skip connections efficiently link the downsampling and upsampling paths. The second is a set of edge-preserving loss functions, comprising the reverse Huber loss, a depth gradient loss, and a feature edge loss, which is particularly suited to estimating fine details and clear object boundaries. Experiments on the NYU-Depth-v2 and KITTI datasets show that the proposed model is competitive with state-of-the-art methods, achieving root mean squared errors of 0.506 and 4.977, respectively.
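Of the loss terms named above, the reverse Huber (berHu) loss has a standard formulation in the depth-estimation literature: it behaves like L1 for small residuals and like L2 for large ones, with the switch-over threshold set per batch. Below is a minimal NumPy sketch of that common formulation; the function name and the 0.2-of-max threshold rule are illustrative assumptions, and the paper's exact setting may differ.

```python
import numpy as np

def berhu_loss(pred, target, scale=0.2):
    """Reverse Huber (berHu) loss sketch (threshold rule is an assumption).

    L1 for residuals below a threshold c, scaled L2 above it, where
    c = scale * (maximum absolute residual in the batch). The two
    branches meet continuously at |residual| == c.
    """
    residual = np.abs(pred - target)
    c = scale * residual.max()
    if c == 0:
        # Perfect prediction: every residual is zero.
        return 0.0
    l1 = residual
    l2 = (residual ** 2 + c ** 2) / (2 * c)
    return float(np.where(residual <= c, l1, l2).mean())
```

Because the threshold adapts to the largest residual in each batch, the quadratic branch keeps penalizing the worst errors strongly while the L1 branch preserves sharp depth discontinuities, which is why berHu is often paired with gradient-based edge losses.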
