Abstract
In recent years, deep learning has significantly advanced image depth estimation. Depth estimation networks with single-view input can extract features only from a single 2D image and neglect the information contained in neighboring views; the learned features therefore lack real-world 3D geometric information and impose weaker constraints on the 3D structure, which limits depth estimation performance. Moreover, in the absence of accurate camera information, the multi-view geometric cues obtained by some methods may not faithfully reflect the true 3D structure, leaving depth estimation algorithms without reliable multi-view geometric constraints. To address this problem, a multi-view projection-assisted image depth estimation network is proposed, which integrates multi-view stereo vision into a deep-learning-based encoder-decoder depth estimation framework without requiring pre-estimated view poses. The network estimates optical flow for pixel-level matching across views and uses it to project the features of neighboring views onto the reference viewpoint for self-attentive feature aggregation, compensating for the lack of stereo geometric information in the depth estimation framework. Additionally, a multi-view reprojection error is designed to supervise the optical flow estimation and effectively constrain it. In addition, a long-distance attention decoding module is proposed to extract and aggregate features from distant regions of the scene, enhancing the perception of far-away areas in outdoor environments. Experimental results on the KITTI, vKITTI, and SeasonDepth datasets demonstrate that the proposed method achieves significant improvements over other state-of-the-art depth estimation methods, confirming its superior performance in image depth estimation.
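The core idea described above, warping neighboring-view features to the reference viewpoint with an estimated optical-flow field and then fusing them via self-attention, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the module names, tensor shapes, and the single-neighbor, single-head fusion setup are assumptions made purely for demonstration.

```python
# Minimal sketch (assumed design, not the paper's code): warp a neighboring
# view's feature map to the reference view with a dense optical-flow field,
# then aggregate the warped and reference features with self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_flow(feat_src: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp source-view features to the reference view.
    feat_src: (B, C, H, W) features of the neighboring view.
    flow:     (B, 2, H, W) per-pixel displacement (dx, dy) in pixels.
    """
    b, _, h, w = feat_src.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_src.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                                  # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)                   # (B, H, W, 2)
    return F.grid_sample(feat_src, grid, align_corners=True)


class CrossViewFusion(nn.Module):
    """Toy self-attentive aggregation of reference and warped features."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feat_ref: torch.Tensor, feat_warped: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_ref.shape
        q = feat_ref.flatten(2).transpose(1, 2)       # (B, H*W, C) queries
        kv = feat_warped.flatten(2).transpose(1, 2)   # (B, H*W, C) keys/values
        fused, _ = self.attn(q, kv, kv)               # reference attends to warped view
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feat_ref = torch.randn(1, 64, 48, 160)    # reference-view features (assumed shape)
    feat_src = torch.randn(1, 64, 48, 160)    # neighboring-view features
    flow = torch.randn(1, 2, 48, 160) * 2.0   # estimated optical flow in pixels
    fused = CrossViewFusion(64)(feat_ref, warp_with_flow(feat_src, flow))
    print(fused.shape)  # torch.Size([1, 64, 48, 160])
```

In this sketch the fused feature map would feed the decoder in place of the single-view features; the paper's multi-view reprojection error and long-distance attention decoding module are not modeled here.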