Abstract

Depth estimation is crucial for scene understanding and downstream tasks, and self-supervised training methods in particular show great potential. Both the overall structure and the local details of a scene are essential for improving the quality of depth estimation. Monodepth2 brought significant progress to self-supervised monocular depth estimation, but it uses a basic encoder–decoder architecture: the limited information flow in this network leaves a large semantic gap between the encoder and the decoder, which reduces the accuracy of fine-grained feature recognition. Monodepth2 also adopts a ResNet-18 pre-trained on ImageNet as its encoder, and this conventional convolution-and-pooling structure loses pixel information at every scale. To address these problems, this paper proposes an improved DepthNet. The network adopts HRNet, originally designed for semantic segmentation, as the base encoder; HRNet performs multi-scale fusion throughout the network and thereby avoids the loss of pixel information. A densely connected U-Net is employed on the decoder side to provide more information flow, and the semantic gap between the encoder and the decoder is further reduced by adding different numbers of residual connections and channel attention at each layer. The resulting structure can be regarded as a collection of fully convolutional networks. Since the deep features of the network correlate strongly with vertical position, a spatial location attention module is added to the deeper layers to further reduce this semantic gap. The approach performs well on the KITTI benchmark, with several metrics comparable to those of supervised monocular depth estimation methods.
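As an illustration of the channel attention used to reweight skip-connection features before they enter the decoder, the sketch below shows a squeeze-and-excitation style block in PyTorch. This is a minimal, hypothetical form assumed for clarity; the paper's abstract does not specify the exact design of its attention modules.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention (assumed form, not the paper's exact module)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight skip-connection features channel-wise


# Usage: reweight an encoder feature map before fusing it in the decoder.
feat = torch.randn(2, 64, 48, 160)           # (batch, channels, height, width)
out = ChannelAttention(64)(feat)
print(out.shape)                              # torch.Size([2, 64, 48, 160])
```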
