Abstract

Depth estimation from a single monocular image is a vital but challenging task in 3D vision and scene understanding. Previous unsupervised methods have yielded impressive results, but the predicted depth maps still suffer from several shortcomings, such as missing small objects and blurred object edges. To address these problems, we propose a multi-scale spatial attention-guided monocular depth estimation method with semantic enhancement. Specifically, we first construct a multi-scale spatial attention-guided block based on atrous spatial pyramid pooling and spatial attention. Then, the correlation between the left and right views is fully exploited via mutual information to obtain a more robust feature representation. Finally, we design a double-path prediction network that simultaneously generates depth maps and semantic labels. The proposed multi-scale spatial attention-guided block focuses more on objects, especially small ones, and the additional semantic information sharpens object edges in the predicted depth maps. We conduct comprehensive evaluations on public benchmark datasets such as KITTI and Make3D. The experimental results demonstrate the effectiveness of the proposed method, which achieves better performance than other self-supervised methods.
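The abstract's multi-scale spatial attention-guided block combines atrous spatial pyramid pooling (multi-rate dilated sampling) with a spatial attention gate. The paper does not publish its exact implementation, so the following is only a minimal NumPy sketch of the general idea: the dilated-pooling branch, the pooling rates, and the unlearned (sum-based) attention score are all illustrative assumptions, not the authors' network.

```python
import numpy as np

def dilated_avg_pool(feat, rate):
    """Crude stand-in for one atrous branch: average each location with
    its 3x3 neighborhood sampled at the given dilation rate (assumed)."""
    out = np.zeros_like(feat)
    for dy in (-rate, 0, rate):
        for dx in (-rate, 0, rate):
            # np.roll gives circular padding; real ASPP uses zero padding
            out += np.roll(feat, shift=(dy, dx), axis=(1, 2))
    return out / 9.0

def spatial_attention(feat):
    """CBAM-style spatial attention: pool across channels, build a
    per-pixel gate in (0, 1), and reweight every spatial location."""
    avg = feat.mean(axis=0, keepdims=True)   # (1, H, W) channel-average
    mx = feat.max(axis=0, keepdims=True)     # (1, H, W) channel-max
    score = avg + mx                         # stand-in for a learned conv
    gate = 1.0 / (1.0 + np.exp(-score))      # sigmoid attention map
    return feat * gate                       # emphasize salient regions

def ms_attention_block(feat, rates=(1, 2, 4)):
    """Fuse several dilation rates (multi-scale context), then gate
    the fused features with spatial attention."""
    fused = sum(dilated_avg_pool(feat, r) for r in rates) / len(rates)
    return spatial_attention(fused)

feat = np.random.rand(8, 16, 16).astype(np.float32)  # (channels, H, W)
out = ms_attention_block(feat)
print(out.shape)  # same shape as the input feature map
```

In a trained network the channel pooling would feed a learned convolution before the sigmoid, and the dilated branches would be learned atrous convolutions; the sketch only shows why small objects benefit: the per-pixel gate can amplify fine spatial detail that plain multi-scale pooling would smooth away.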

