Abstract
Thanks to their powerful learning capability, deep neural networks (DNNs) have found broad application in single image reflection removal. DNN-based algorithms relax the constraints of specific priors and learn to generate visually pleasant background layers from massive training data. However, most of them employ a single network structure to recover both the semantic information and the local details of the background, which may lead to obvious reflection residue or even failure. To mitigate this deficiency, we propose a Multi-stage Curvature-guided De-Reflection Network (MCDRNet), which combines multiple network architectures in a unified framework to progressively reconstruct the background layer and refine its fine-grained details. Our framework consists of three stages: encoder-decoders are exploited in the first two stages to recover the semantic components of the background layer at lower scales, and a ResNet variant is applied in the last stage to refine the background details at the original input resolution. In the first two stages, to introduce structural guidance for reflection removal, we cascade another decoder branch to restore the curvature map of the background. In addition, at the end of the first two stages, instead of directly passing the intermediate estimates to the next stage, we propose a Non-local Attention Module (NAM) to augment and transmit the decoder features. Extensive experimental results on several public datasets demonstrate that the proposed MCDRNet outperforms state-of-the-art methods quantitatively and generates visually better reflection removal results. The source code and pre-trained models are available at https://github.com/NamecantbeNULL/MCDRNet.
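To make the three-stage design concrete, the following is a minimal, hypothetical sketch of the pipeline described above. The module names, channel widths, stage interfaces, and the internals of the non-local attention block are assumptions for illustration only, not the authors' released implementation (see the GitHub repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalAttention(nn.Module):
    """Assumed stand-in for the paper's Non-local Attention Module (NAM):
    a standard non-local block with a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # B x HW x C/2
        k = self.phi(x).flatten(2)                     # B x C/2 x HW
        v = self.g(x).flatten(2).transpose(1, 2)       # B x HW x C/2
        attn = torch.softmax(q @ k, dim=-1)            # B x HW x HW
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # augmented features


class MCDRNetSketch(nn.Module):
    """Hypothetical three-stage skeleton: two curvature-guided
    encoder-decoder stages at half resolution, one full-resolution
    ResNet-style refinement stage."""
    def __init__(self, enc_dec, refine_resnet, feat_channels=64):
        super().__init__()
        # Stages 1-2: encoder-decoders with an extra decoder branch that
        # predicts the background curvature map; each factory is assumed to
        # accept (image, prev_feats) and return (background, curvature, feats).
        self.stage1, self.stage2 = enc_dec(), enc_dec()
        self.nam1 = NonLocalAttention(feat_channels)
        self.nam2 = NonLocalAttention(feat_channels)
        # Stage 3: detail refinement at the original input resolution.
        self.stage3 = refine_resnet()

    def forward(self, x):
        x_low = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                              align_corners=False)
        bg1, curv1, feat1 = self.stage1(x_low, None)
        feat1 = self.nam1(feat1)            # augment features before passing on
        bg2, curv2, feat2 = self.stage2(x_low, feat1)
        feat2 = self.nam2(feat2)
        bg = self.stage3(x, feat2)          # full-resolution background estimate
        return bg, (bg1, curv1), (bg2, curv2)
```

The key design point this sketch illustrates is that the intermediate estimates are not handed to the next stage directly; instead, the decoder features are first augmented by the non-local attention block and then transmitted forward.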