Self-supervised monocular depth estimation methods have shown promising results by leveraging geometric relationships among image sequences for network supervision. However, existing methods often suffer from blurry depth edges, high computational overhead, and information redundancy. This paper analyzes techniques for deep feature encoding, decoding, and regression, and proposes a novel depth estimation network termed HPD-Depth, optimized by three strategies: a Residual Channel Attention Transition (RCAT) module that bridges the semantic gap between encoding and decoding features while highlighting important features; a Sub-pixel Refinement Upsampling (SPRU) module that produces high-resolution feature maps with fine detail; and an Adaptive Hybrid Convolutional Attention (AHCA) module that addresses local depth confusion and depth boundary blurriness. HPD-Depth extracts clear scene structures and captures detailed local information while maintaining an effective balance between accuracy and parameter count. Comprehensive experiments demonstrate that HPD-Depth performs comparably to state-of-the-art algorithms on the KITTI benchmarks and shows significant potential when trained with high-resolution data. Compared with the baseline model, the average relative error and squared relative error are reduced by 6.09% and 12.62%, respectively, in low-resolution experiments, and by 11.3% and 18.5%, respectively, in high-resolution experiments. Moreover, HPD-Depth demonstrates excellent generalization performance on the Make3D dataset.
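The abstract does not detail the SPRU module's internals; sub-pixel upsampling in depth networks is commonly realized as a pixel-shuffle rearrangement, in which a convolution first expands the channel dimension by a factor of r² and the channels are then rearranged into an r×-larger spatial grid. The sketch below is an illustrative NumPy implementation of that rearrangement only (the function name and shapes are assumptions, not the paper's code):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Illustrative sketch of the sub-pixel (pixel-shuffle) step often used
    in refinement-upsampling modules; not the paper's actual SPRU code.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split channels into (c, r, r), then interleave the two r-axes
    # with the spatial axes so each group of r*r channels fills an
    # r x r block of the upsampled output.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Minimal demo: 4 channels at 1x1 become 1 channel at 2x2.
demo = pixel_shuffle(np.arange(4, dtype=float).reshape(4, 1, 1), 2)
# demo[0] == [[0., 1.], [2., 3.]]
```

In a full decoder this rearrangement typically follows a 3x3 convolution that produces the C*r² channels, trading channel depth for spatial resolution without the checkerboard artifacts of transposed convolutions.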