Abstract

In self-supervised, deep-learning-based image depth estimation, the nature of the self-supervised training signal causes problems such as visual shadows and erroneous depth estimates at infinity. This paper tackles these problems by adopting a spatial consistency loss together with multiple loss constraints. The depth estimation network and the pose estimation network are trained on binocular image sequences, and the depth estimation problem is recast as image reconstruction and scene reconstruction to enable self-supervised training. Building on Deep Convolutional Neural Networks (DCNNs), the depth estimation network introduces Deep Atrous Convolution (DAC) and Atrous Spatial Pyramid Pooling (ASPP) modules to extract multi-scale features and enlarge the network's receptive field. The spatial-consistency scene reconstruction strategy greatly reduces the estimation error of the pose network. Quantitative and qualitative evaluation on the KITTI dataset shows that the proposed algorithm outperforms existing self-supervised depth estimation algorithms in accuracy and error, and even exceeds most supervised algorithms.
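To illustrate the atrous (dilated) convolution idea underlying the DAC and ASPP modules, the sketch below shows how dilation enlarges the receptive field without adding parameters. This is a minimal 1-D NumPy illustration for intuition only, not the paper's implementation; the function name and toy data are our own.

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D dilated ('atrous') convolution with 'valid' padding.

    Kernel taps are spaced `rate` samples apart, so a k-tap kernel
    covers a receptive field of (k - 1) * rate + 1 input samples
    while still using only k parameters.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1          # receptive field per output sample
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(out_len)
    ])

signal = np.arange(10, dtype=float)    # toy input sequence
kernel = np.array([1.0, 1.0, 1.0])     # 3-tap summing kernel

# rate=1 behaves like an ordinary convolution (receptive field 3);
# rate=3 covers 7 input samples per output with the same 3 weights.
dense   = atrous_conv1d(signal, kernel, rate=1)
dilated = atrous_conv1d(signal, kernel, rate=3)
```

An ASPP module applies several such convolutions in parallel at different rates and fuses the results, capturing multi-scale context at a fixed parameter budget.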
