Abstract

Separating the background and reflection layers is an ill-posed but meaningful task. Since the two layers often lie at different depths, Light Fields (LFs), which encode depth information, show great potential for this task. In this paper, we propose to fully exploit the differences between the background and reflection layers across the different kinds of images available in LFs. Specifically, we fuse complementary features from the sub-aperture images, in which the overlap between the two layers varies considerably. We then propose an adaptive focus selection strategy that uses a dynamic filter to select the appropriate focus; the refocused image is obtained from the focal stack images, in which the sharpness of the two layers differs. Finally, the clean images are restored using a dual attention reconstruction module. Our model is the first to address the LF reflection separation task with an end-to-end deep neural network. Experiments on both synthetic and real-world datasets show that the background and reflection layers are well separated regardless of their relative intensities. Quantitative and qualitative comparisons show that our method outperforms other state-of-the-art methods.

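The abstract outlines a three-stage pipeline: sub-aperture feature fusion, dynamic-filter-based focus selection over a focal stack, and dual attention reconstruction. The following is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together; all module names, layer widths, and tensor shapes are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of the described pipeline; module names and shapes are
# assumptions for illustration, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubApertureFusion(nn.Module):
    """Fuses features from stacked sub-aperture views (assumed layout)."""
    def __init__(self, num_views, channels=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(num_views * 3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, views):            # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        return self.fuse(views.reshape(b, v * c, h, w))


class FocusSelector(nn.Module):
    """Predicts per-pixel weights over the focal stack, standing in for the
    adaptive focus selection / dynamic filter mentioned in the abstract."""
    def __init__(self, in_channels, stack_size):
        super().__init__()
        self.weights = nn.Conv2d(in_channels, stack_size, 3, padding=1)

    def forward(self, feats, focal_stack):  # focal_stack: (B, S, 3, H, W)
        w = F.softmax(self.weights(feats), dim=1)          # (B, S, H, W)
        return (w.unsqueeze(2) * focal_stack).sum(dim=1)   # refocused image


class DualAttentionReconstruction(nn.Module):
    """Simplified channel + spatial attention head predicting both layers."""
    def __init__(self, in_channels):
        super().__init__()
        self.body = nn.Conv2d(in_channels, 64, 3, padding=1)
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(64, 64, 1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(
            nn.Conv2d(64, 1, 7, padding=3), nn.Sigmoid())
        self.head = nn.Conv2d(64, 6, 3, padding=1)  # background + reflection

    def forward(self, x):
        f = F.relu(self.body(x))
        f = f * self.channel_att(f) * self.spatial_att(f)
        out = self.head(f)
        return out[:, :3], out[:, 3:]    # (background, reflection)


# Illustrative forward pass with dummy data (sizes are assumptions).
views = torch.rand(1, 9, 3, 128, 128)        # 9 sub-aperture images
focal_stack = torch.rand(1, 5, 3, 128, 128)  # 5 refocused slices
fusion = SubApertureFusion(num_views=9)
selector = FocusSelector(in_channels=32, stack_size=5)
recon = DualAttentionReconstruction(in_channels=32 + 3)

feats = fusion(views)
refocused = selector(feats, focal_stack)
background, reflection = recon(torch.cat([feats, refocused], dim=1))
```

This sketch only demonstrates the data flow between the stages described in the abstract; the paper's actual fusion, dynamic filtering, and dual attention designs are not specified here.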