Multi-camera depth estimation has gained significant attention in autonomous driving due to its importance in perceiving complex surrounding environments. However, extending monocular self-supervised methods to multi-camera setups introduces challenges that existing techniques often fail to address. In this paper, we propose STViT+, a novel Transformer-based framework for self-supervised multi-camera depth estimation. Our key contributions are: 1) the Spatial-Temporal Transformer (STTrans), which integrates local spatial connectivity with global context to capture enriched spatial-temporal cross-view correlations, yielding more accurate 3D geometry reconstruction; 2) the Spatial-Temporal Photometric Consistency Correction (STPCC) strategy, which mitigates the impact of varying illumination by enforcing brightness consistency across frames during photometric loss computation; 3) the Adversarial Geometry Regularization (AGR) module, which employs Generative Adversarial Networks to impose spatial constraints using unpaired depth maps, improving robustness under adverse conditions such as rain and nighttime driving. Extensive evaluations on large-scale autonomous driving datasets, including nuScenes and DDAD, confirm that STViT+ sets a new state of the art for multi-camera depth estimation.
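The abstract does not give the exact STPCC formulation, but the underlying idea, correcting brightness between frames before computing the photometric loss, is a standard ingredient of self-supervised depth pipelines. Below is a minimal PyTorch sketch under the assumption of a per-image affine brightness model; the function names `brightness_align`, `ssim`, and `photometric_loss` are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F


def brightness_align(src: torch.Tensor, tgt: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Assumed affine brightness model: fit a, b per image minimizing
    ||a * src + b - tgt||^2 in closed form (ordinary least squares),
    so the warped source matches the target's exposure before the
    photometric loss is computed. Tensors are (B, C, H, W) in [0, 1]."""
    src_mean = src.mean(dim=(1, 2, 3), keepdim=True)
    tgt_mean = tgt.mean(dim=(1, 2, 3), keepdim=True)
    cov = ((src - src_mean) * (tgt - tgt_mean)).mean(dim=(1, 2, 3), keepdim=True)
    var = src.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
    a = cov / (var + eps)
    b = tgt_mean - a * src_mean
    return (a * src + b).clamp(0.0, 1.0)


def ssim(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Simplified SSIM dissimilarity map over 3x3 windows, as commonly
    used in self-supervised depth losses."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return ((1 - num / den) / 2).clamp(0.0, 1.0)


def photometric_loss(warped_src: torch.Tensor, tgt: torch.Tensor,
                     alpha: float = 0.85) -> torch.Tensor:
    """SSIM + L1 photometric error, evaluated on the brightness-corrected
    warped source rather than the raw one."""
    corrected = brightness_align(warped_src, tgt)
    l1 = (corrected - tgt).abs().mean(dim=1, keepdim=True)
    structural = ssim(corrected, tgt).mean(dim=1, keepdim=True)
    return (alpha * structural + (1 - alpha) * l1).mean()
```

The closed-form least-squares fit keeps the correction parameter-free: no extra learnable weights are introduced, and the correction adapts per image pair, which is why this style of alignment helps under the varying illumination the abstract describes.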