Abstract

Depth estimation for light field images is crucial in light field applications such as image-based rendering and refocusing. Previous learning-based methods combining neural networks with cost volumes can achieve accurate depth estimation but fail in regions with occlusion. In this paper, a two-stage attention-based occlusion-aware light field depth estimation network is proposed. In the initial depth estimation stage, the sub-aperture images are divided into four groups according to view direction, and four initial cost volumes are constructed from the feature maps of each group to aggregate initial depth maps. In the refined depth estimation stage, the four aggregated volumes from the initial stage are fused into one based on view attention, where features of views with less occlusion are weighted more heavily to provide more effective information. Experimental results demonstrate that the proposed method achieves robust and accurate depth estimation in the presence of occlusion, ranking first on the 4D light field benchmark in terms of most accuracy metrics.
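The view-attention fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation: the attention logits here are placeholder inputs (in the actual network they would be predicted by a learned sub-network), and shapes are chosen arbitrarily. The sketch shows the core idea of weighting four directional cost volumes per pixel so that less-occluded view groups dominate the fused volume.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_cost_volumes(volumes, attention_logits):
    """Fuse directional cost volumes with per-pixel view attention.

    volumes:          (4, D, H, W) cost volumes from the four view groups
    attention_logits: (4, H, W) per-pixel scores for each group
                      (hypothetical here; learned in the actual network)
    returns:          (D, H, W) fused cost volume
    """
    weights = softmax(attention_logits, axis=0)       # (4, H, W), sums to 1 over groups
    # Broadcast weights over the disparity dimension and take the weighted sum.
    return (volumes * weights[:, None]).sum(axis=0)   # (D, H, W)

rng = np.random.default_rng(0)
vols = rng.standard_normal((4, 9, 8, 8))    # 4 view groups, 9 disparity bins, 8x8 image
logits = rng.standard_normal((4, 8, 8))
fused = fuse_cost_volumes(vols, logits)
print(fused.shape)  # (9, 8, 8)
```

A per-pixel softmax (rather than a single global weight per group) lets the fusion favor different view directions in different occluded regions of the same scene.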
