Abstract

Most existing single-image deraining networks are trained in a supervised manner, which requires paired images consisting of a clean image and its rainy counterpart. In most cases, the rain images are synthesized manually from the clean ones to obtain a sufficient number of pairs. However, both considerable time and expert knowledge are needed to ensure that the synthesized images are realistic enough. In addition, the superior performance of deraining networks trained on manually synthesized rain images is difficult to maintain when they are tested on real rain images. To address these issues, we propose a scene-adaptive asymmetric CycleGAN (SAA-CycleGAN) that automatically translates clean images into their rainy counterparts, so that adequate realistic rain images can be obtained for training deraining networks in a supervised way. Moreover, benefiting from the cycle consistency strategy, SAA-CycleGAN can both remove rain from rainy images and synthesize rain on clean images. Since the information involved in the rain synthesis process and the deraining process is not symmetric, the two generators are designed with different architectures for these two processes. Comprehensive experiments show that SAA-CycleGAN synthesizes more lifelike rain images and achieves deraining performance comparable to that of state-of-the-art deraining methods.
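
Conceptually, the cycle consistency strategy couples a rain-synthesis generator and a deraining generator so that mapping an image to the other domain and back reproduces the input, while the two generators may use different (asymmetric) architectures. The following is a minimal PyTorch-style sketch of this idea only; the module definitions, names (RainSynthesisGenerator, DerainGenerator, lambda_cyc), and loss weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cycle consistency with asymmetric generators (assumed setup,
# not the paper's code). G_rain: clean -> rainy; G_derain: rainy -> clean.
import torch
import torch.nn as nn

class RainSynthesisGenerator(nn.Module):
    """Clean -> rainy; hypothetical deeper branch to add a rain-streak residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # additive rain residual

class DerainGenerator(nn.Module):
    """Rainy -> clean; a different, lighter architecture reflects the asymmetry."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # subtract the estimated rain layer

G_rain, G_derain = RainSynthesisGenerator(), DerainGenerator()
l1 = nn.L1Loss()
lambda_cyc = 10.0  # assumed cycle-consistency weight

clean = torch.rand(1, 3, 64, 64)   # unpaired clean image
rainy = torch.rand(1, 3, 64, 64)   # unpaired real rain image

# Forward cycle: clean -> synthesized rainy -> reconstructed clean
fake_rainy = G_rain(clean)
cycle_clean = G_derain(fake_rainy)
# Backward cycle: rainy -> derained -> re-rained
fake_clean = G_derain(rainy)
cycle_rainy = G_rain(fake_clean)

cycle_loss = lambda_cyc * (l1(cycle_clean, clean) + l1(cycle_rainy, rainy))
# Adversarial losses from domain discriminators would be added to this term.
```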
