Abstract
Although great progress has been made on single-image deraining, it remains challenging for existing models to produce satisfactory results directly, and one or more refinement stages are often required to gradually improve the output quality. In this paper, we demonstrate that existing image-level refinement with a stage-independent learning design is problematic, producing the side effect of over- or under-deraining. To resolve this issue, we propose, for the first time, a mechanism that learns to refine the unsatisfactory features themselves, realized as a novel attentive feature refinement (AFR) module. Specifically, AFR is designed as a two-branch network that simultaneously learns a rain-distribution-aware attention map and performs attention-guided, hierarchy-preserving feature refinement. Guided by task-specific attention, coarse features are progressively refined to better model diverse rain effects. Built on separable convolutions as the basic component, the AFR module introduces little computational overhead and can be readily integrated into most rainy-to-clean image translation networks to achieve better deraining results. By incorporating a series of AFR modules into a general encoder-decoder network, we construct AFR-Net, which achieves new state-of-the-art deraining results on both synthetic and real images. Furthermore, using AFR-Net as a teacher model, we apply knowledge distillation to learn a student model that also achieves state-of-the-art results but with much faster inference (it takes only 0.08 seconds to process a 512×512 rainy image). Code and pre-trained models are available at https://github.com/RobinCSIRO/AFR-Net.
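To make the described design concrete, the following is a minimal PyTorch sketch of a two-branch attentive feature refinement block built from separable convolutions. The class and parameter names (AFRModule, SeparableConv2d, the branch depths, and the attention-gated residual update) are illustrative assumptions inferred from the abstract, not the authors' released implementation; consult the linked repository for the exact architecture.

```python
import torch
import torch.nn as nn


class SeparableConv2d(nn.Module):
    """Depthwise-separable convolution: a depthwise conv followed by a
    pointwise (1x1) conv, used to keep the block lightweight."""

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class AFRModule(nn.Module):
    """Hypothetical sketch of an attentive feature refinement block:
    one branch estimates a rain-distribution-aware attention map, the
    other produces candidate refined features; the map gates the
    refinement, and a residual connection preserves the input
    feature hierarchy."""

    def __init__(self, channels):
        super().__init__()
        # Attention branch: predicts a spatial gating map in [0, 1].
        self.attention = nn.Sequential(
            SeparableConv2d(channels, channels),
            nn.ReLU(inplace=True),
            SeparableConv2d(channels, channels),
            nn.Sigmoid(),
        )
        # Refinement branch: proposes corrections to the coarse features.
        self.refine = nn.Sequential(
            SeparableConv2d(channels, channels),
            nn.ReLU(inplace=True),
            SeparableConv2d(channels, channels),
        )

    def forward(self, feats):
        attn = self.attention(feats)
        refined = self.refine(feats)
        # Attention-gated residual update of the coarse features.
        return feats + attn * refined


if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)   # a coarse feature map
    print(AFRModule(64)(x).shape)      # torch.Size([1, 64, 128, 128])
```

Because the block keeps input and output shapes identical and adds only separable convolutions, a stack of such modules can be dropped between the stages of an encoder-decoder deraining network, consistent with the plug-in use the abstract describes.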