Abstract

Image deraining is a low-level restoration task that has attracted considerable attention over the past decades. Although recent data-driven deraining models exhibit promising results, most of them are trained on synthetic rain data sets and therefore generalize poorly to real rain images. Models trained on recent real-rain data sets achieve favorable generalization, but producing the rain-free ground truths for such data sets is tedious and time-consuming. To address this problem, we present rain-to-rain training, an unsupervised training method for single image deraining. Our experiments show that it is possible to train single image deraining models using only rain images, simply by training the models to map pairs of rain images onto each other. We also introduce the idea of least overlapping training pairs, a method for selecting suitable training pairs that enables rain-to-rain training to match the deraining performance of supervised training.
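
The training objective described above admits a compact illustration. The sketch below is a minimal example, not the authors' implementation: it assumes aligned pairs of rainy photographs of the same static scene, and the helper names rain_overlap, select_least_overlapping, and rain2rain_step are hypothetical. It trains a network to map one rainy observation to another and uses a crude rain-mask overlap proxy to pick the least overlapping partner for each anchor image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of rain-to-rain training (hypothetical code, not the paper's implementation).
# Both the input and the target are rainy photographs of the same static scene;
# no rain-free ground truth is ever used.

def rain_overlap(rain_a, rain_b, thresh=0.1):
    """Crude proxy for rain-streak overlap between two rainy images of one scene.

    Pixels noticeably brighter than the element-wise minimum of the pair are
    treated as rain; the returned value is the fraction of pixels that are
    rainy in both images.
    """
    base = torch.minimum(rain_a, rain_b)      # rough estimate of the shared background
    mask_a = (rain_a - base) > thresh         # candidate rain pixels in image A
    mask_b = (rain_b - base) > thresh         # candidate rain pixels in image B
    return (mask_a & mask_b).float().mean()

def select_least_overlapping(anchor, candidates):
    """Pick the rainy image whose rain streaks overlap the anchor's the least."""
    overlaps = torch.stack([rain_overlap(anchor, c) for c in candidates])
    return candidates[int(overlaps.argmin())]

def rain2rain_step(model: nn.Module, optimizer, rain_input, rain_target):
    """One unsupervised training step: a rainy image is mapped to another rainy image."""
    optimizer.zero_grad()
    pred = model(rain_input)
    loss = F.l1_loss(pred, rain_target)       # the target's rain streaks act as unpredictable noise
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the rain streaks in the input and the target rarely coincide, the only content the network can consistently predict is the shared rain-free scene, which is what makes this unsupervised objective behave like deraining; choosing the least overlapping pair strengthens that separation.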

Highlights

  • Numerous breakthroughs have been witnessed in both high-level and low-level tasks such as image classification [1]–[3], object detection [4]–[6], image denoising [7]–[9], inpainting [10], single image super-resolution [11], and more

  • Deep learning has become the most popular approach for computer vision tasks

  • If our aim is to generate rain images, other generative approaches can be considered, such as variational autoencoders (VAEs) or generative adversarial networks (GANs)

Summary

Introduction

Numerous breakthroughs have been witnessed in both high-level and low-level tasks such as image classification [1]–[3], object detection [4]–[6], image denoising [7]–[9], inpainting [10], single image super-resolution [11], and more. These deep learning models are particularly successful when trained in a supervised fashion, that is, when large, well-labeled data sets are available to drive the optimization process. In [12]–[14], it has been shown that various computer vision algorithms suffer performance degradation when applied to images contaminated by atmospheric degradations.

