Abstract

Monocular depth estimation is a fundamental task in autonomous driving, robotics, and virtual reality. It has attracted growing research interest because it predicts a depth map efficiently from a single RGB image. However, monocular depth estimation is an ill-posed problem and is sensitive to image conditions such as lighting, occlusion, and noise. We propose an encoder-decoder network that applies multi-level attention and aggregates densely weighted feature maps. Our model is evaluated on the NYU Depth v2 dataset, and experimental results demonstrate that it achieves promising performance.
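To make the abstract's idea of attention-weighting and aggregating multi-level feature maps concrete, here is a minimal NumPy sketch. It is an illustrative assumption, not the paper's actual architecture: the SE-style sigmoid channel gate, the nearest-neighbor upsampling, and the function names `channel_attention` and `aggregate_levels` are all hypothetical.

```python
import numpy as np

def channel_attention(feat):
    """Weight each channel of a (C, H, W) feature map by a learned-style gate.

    Sketch only: the gate here is a sigmoid of the channel's global average,
    standing in for a trained attention module.
    """
    desc = feat.mean(axis=(1, 2))              # squeeze: (C,) channel descriptors
    weights = 1.0 / (1.0 + np.exp(-desc))      # excite: sigmoid gate per channel
    return feat * weights[:, None, None]       # reweighted feature map

def aggregate_levels(features, out_hw):
    """Upsample attended encoder levels to a common size and sum them."""
    H, W = out_hw
    agg = np.zeros((features[0].shape[0], H, W))
    for feat in features:
        attended = channel_attention(feat)
        _, h, w = attended.shape
        rows = np.arange(H) * h // H           # nearest-neighbor row indices
        cols = np.arange(W) * w // W           # nearest-neighbor col indices
        agg += attended[:, rows][:, :, cols]
    return agg

# Three encoder levels at decreasing resolution, same channel count.
rng = np.random.default_rng(0)
levels = [rng.standard_normal((8, s, s)) for s in (32, 16, 8)]
fused = aggregate_levels(levels, (32, 32))
print(fused.shape)  # (8, 32, 32)
```

In a real network the gate would be produced by trainable layers and the fused map would feed the decoder's depth regression head; this sketch only shows the aggregation pattern.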
