Abstract

Depth computation from an image is useful for many robotic systems, such as obstacle recognition, autonomous navigation, and 3D measurement. Because monocular depth estimation is a non-linear, ill-posed problem, it is best addressed with Deep Neural Networks (DNNs). The network is trained on single color images paired with ground-truth depth maps and, after training, predicts a depth map from a single image. The accuracy of the predicted depth therefore depends on the quality of both the ground truth and the training images; images also contain inherent blur, which degrades depth prediction accuracy. In our work, we study different combinations of loss functions involving various edge functions to improve the predicted depth maps. We use DenseNet with transfer learning to learn and predict depth. Our analysis shows improvements both in performance metrics and in the visual quality of the depth maps: on the NYU Depth V2 dataset, we achieve 85% δ1 accuracy and a reduced log10 error.
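To make the loss combination and the reported metrics concrete, the sketch below (PyTorch) shows one plausible form of a combined depth loss with an edge (image-gradient) term, along with the δ1 accuracy and log10 error metrics mentioned above. The specific weights, the forward-difference edge operator, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a combined depth loss with an edge term, plus the
# delta_1 and log10 metrics referenced in the abstract. Weights and the edge
# operator are assumptions for illustration only.
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Forward-difference gradients along height and width of a B x 1 x H x W tensor."""
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dy, dx

def depth_loss(pred, target, w_l1=0.1, w_edge=1.0):
    """Point-wise L1 term plus an edge (gradient) term; weights are illustrative."""
    l1 = F.l1_loss(pred, target)
    pdy, pdx = image_gradients(pred)
    tdy, tdx = image_gradients(target)
    edge = F.l1_loss(pdy, tdy) + F.l1_loss(pdx, tdx)
    return w_l1 * l1 + w_edge * edge

def delta_accuracy(pred, target, thr=1.25):
    """Fraction of pixels with max(pred/target, target/pred) < thr (delta_1 when thr=1.25)."""
    ratio = torch.max(pred / target, target / pred)
    return (ratio < thr).float().mean()

def log10_error(pred, target):
    """Mean absolute difference of base-10 log depths."""
    return (torch.log10(pred) - torch.log10(target)).abs().mean()

if __name__ == "__main__":
    pred = torch.rand(2, 1, 240, 320) + 0.5  # fake predicted depth maps
    gt = torch.rand(2, 1, 240, 320) + 0.5    # fake ground-truth depth maps
    print(depth_loss(pred, gt).item(),
          delta_accuracy(pred, gt).item(),
          log10_error(pred, gt).item())
```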
