Abstract

Depth estimation is a fundamental problem for light field based applications. Although recent learning-based methods have proven effective for light field depth estimation, they still have trouble handling occlusion regions. In this paper, by leveraging an explicitly learned occlusion map, we propose an occlusion-aware network capable of estimating accurate depth maps with sharp edges. Our main idea is to separate depth estimation on non-occlusion and occlusion regions, as they exhibit different properties with respect to the light field structure, i.e., obeying and violating the angular photo-consistency constraint, respectively. To this end, three modules are involved in our network: the occlusion region detection network (ORDNet), the coarse depth estimation network (CDENet), and the refined depth estimation network (RDENet). Specifically, ORDNet predicts the occlusion map as a mask, while under the guidance of the resulting occlusion map, CDENet and RDENet focus on depth estimation in non-occlusion and occlusion areas, respectively. Experimental results show that our method achieves better performance on the 4D light field benchmark, especially in occlusion regions, when compared with current state-of-the-art light-field depth estimation algorithms.
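The abstract describes the occlusion map acting as a mask that routes each pixel to one of the two depth branches. The paper does not give implementation details here, so the following is only a minimal sketch of such mask-guided fusion, assuming the occlusion map is a per-pixel probability in [0, 1]; the function and variable names are illustrative, not from the authors' code.

```python
import numpy as np

def fuse_depth(occlusion_map, coarse_depth, refined_depth):
    """Blend two depth predictions per pixel: non-occlusion regions
    trust the coarse branch (CDENet-style), occlusion regions trust
    the refined branch (RDENet-style). Hypothetical helper."""
    occ = np.clip(occlusion_map, 0.0, 1.0)
    return (1.0 - occ) * coarse_depth + occ * refined_depth

# Toy 2x2 example: left column non-occluded, right column fully occluded.
occ = np.array([[0.0, 1.0],
                [0.0, 1.0]])
coarse = np.full((2, 2), 1.0)   # depth from the non-occlusion branch
refined = np.full((2, 2), 2.0)  # depth from the occlusion branch
fused = fuse_depth(occ, coarse, refined)
# left column keeps the coarse value, right column the refined value
```

In a learned system the mask would be a soft probability map produced by the detection network, so the blend is differentiable and all three modules can be trained jointly.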


