Estimating depth from light field images is a critical problem in light field applications. While learning-based methods have made significant strides in light field depth estimation, achieving high accuracy and high speed simultaneously remains a major challenge. This paper proposes a light field depth estimation network based on edge enhancement and feature modulation, which significantly improves depth estimation by emphasizing inter-view correlations while preserving image edge features. Specifically, to prioritize edge details, we introduce an Edge-Enhanced Cost Constructor (EECC) that integrates edge information into existing cost constructors, improving depth estimation in complex regions. Furthermore, most light field depth estimation networks utilize only sub-aperture images (SAIs) without considering the angular information inherent in the macro-pixel image (MacPI). To address this limitation, we propose the MacPI-Guided Feature Modulation (MGFM) module, which leverages the angular information between different views in the MacPI to modulate the features of each view. Experimental results show that our method not only performs strongly on synthetic datasets but also generalizes well to real-world datasets, achieving a better balance between accuracy and computational speed.
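The abstract does not give implementation details, but as background, the two light field representations it contrasts (a stack of sub-aperture images vs. a macro-pixel image) are related by a standard pixel rearrangement used in many light field networks. A minimal NumPy sketch of that conversion, assuming a grayscale light field with angular resolution U×V and spatial resolution H×W (array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def sai_to_macpi(sai):
    """Rearrange a sub-aperture image stack of shape (U, V, H, W)
    into a macro-pixel image of shape (U*H, V*W), where each U x V
    block gathers all angular samples of one spatial location."""
    u, v, h, w = sai.shape
    # (U, V, H, W) -> (H, U, W, V) -> (H*U, W*V)
    return sai.transpose(2, 0, 3, 1).reshape(h * u, w * v)

def macpi_to_sai(macpi, u, v):
    """Inverse rearrangement: recover the (U, V, H, W) SAI stack
    from a (U*H, V*W) macro-pixel image."""
    hu, wv = macpi.shape
    h, w = hu // u, wv // v
    return macpi.reshape(h, u, w, v).transpose(1, 3, 0, 2)
```

In the MacPI layout, neighboring pixels come from different views of the same scene point, which is why operating on it exposes angular (inter-view) structure that per-view SAI processing does not.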