Monocular depth estimation is a challenging problem, especially when trained in a self-supervised manner without ground-truth depth. The self-supervised setting places higher demands on the global and local feature extraction capabilities of the network. Motivated by this observation, we propose an efficient hybrid feature extractor that exploits the complementary capacities of the Transformer and the convolutional neural network to model long-range dependencies and local correlations simultaneously. A bowknot-type fuser is designed to align features from the two sources and to bridge global and local semantic representations. To obtain higher-quality pseudo-labels from the extracted features, we introduce a pseudo-label smoothing technique that fully utilizes the multi-scale features, thereby strengthening the self-distillation loss used as auxiliary supervision for training the network. In addition, we propose a pixel-adaptive smoothness loss that refines the predicted depth map by incorporating the image’s textural and spatial information. The proposed method is trained on the KITTI benchmark using stereo image pairs and achieves competitive depth estimation performance compared with previous approaches. The code and models are available at https://github.com/MaylingLin/BLGR-Depth.
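The abstract does not detail how the pixel-adaptive smoothness loss uses spatial information, so the snippet below is only a minimal sketch of the standard edge-aware smoothness term commonly used in self-supervised depth estimation, which it presumably extends; the function name, disparity normalization, and tensor layout are assumptions, not the paper's implementation.

```python
import torch

def edge_aware_smoothness(disp, img):
    """Sketch of an edge-aware smoothness term: penalize disparity gradients,
    down-weighted where the input image itself has strong gradients.
    disp: (N, 1, H, W) predicted disparity; img: (N, 3, H, W) input image.
    NOTE: illustrative only; the paper's pixel-adaptive variant also uses
    spatial information not described in the abstract."""
    # Normalize disparity so the penalty is scale-invariant
    mean_disp = disp.mean(dim=(2, 3), keepdim=True)
    norm_disp = disp / (mean_disp + 1e-7)

    # First-order gradients of disparity and image along x and y
    grad_disp_x = torch.abs(norm_disp[:, :, :, :-1] - norm_disp[:, :, :, 1:])
    grad_disp_y = torch.abs(norm_disp[:, :, :-1, :] - norm_disp[:, :, 1:, :])
    grad_img_x = torch.mean(torch.abs(img[:, :, :, :-1] - img[:, :, :, 1:]),
                            dim=1, keepdim=True)
    grad_img_y = torch.mean(torch.abs(img[:, :, :-1, :] - img[:, :, 1:, :]),
                            dim=1, keepdim=True)

    # Strong image edges (texture) reduce the smoothness penalty there
    grad_disp_x = grad_disp_x * torch.exp(-grad_img_x)
    grad_disp_y = grad_disp_y * torch.exp(-grad_img_y)
    return grad_disp_x.mean() + grad_disp_y.mean()
```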