Abstract

Rain streaks degrade visibility and hence impair many vision algorithms. We present a double recurrent dense network for removing rain streaks from a single image. Assuming the rainy image is the superposition of a clean image and rain streaks, we directly learn the rain streaks from the rainy image. In contrast to other models, we introduce a double recurrent scheme to promote better reuse of information about both the rain streaks and the relatively clean image. For rain streaks, an LSTM cascaded with DenseNet blocks serves as the basic model. The relatively clean image, predicted by subtracting the estimated rain streaks from the rainy image, is then fed back into the basic model iteratively. Benefiting from this double recurrent scheme, our model makes full use of rain-streak and image-detail information and removes rain streaks thoroughly. Furthermore, we adopt a mix of $L_{1}$, $L_{2}$, and SSIM losses to guarantee good deraining performance. We conduct extensive experiments on synthetic and real rainy images, as well as on the related denoising task. The results show that our model not only significantly outperforms state-of-the-art deraining methods but is also highly effective on a similar task, namely image denoising.
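The mixed loss mentioned in the abstract can be sketched as below. The relative weights `alpha`, `beta`, `gamma` and the simplified single-window SSIM are illustrative assumptions for this sketch, not the paper's exact formulation (SSIM is typically computed over local windows in practice).

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM over the whole image.
    Assumes pixel values in [0, 1]; stability constants c1, c2
    follow the common SSIM convention."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

def mixed_loss(pred, target, alpha=1.0, beta=1.0, gamma=0.1):
    """Weighted mix of L1, L2, and (1 - SSIM); weights are hypothetical."""
    l1 = np.abs(pred - target).mean()            # mean absolute error
    l2 = ((pred - target) ** 2).mean()           # mean squared error
    return alpha * l1 + beta * l2 + gamma * (1.0 - ssim_global(pred, target))
```

For identical images the loss is zero; each term penalizes a different kind of discrepancy (absolute error, squared error, structural dissimilarity), which is the motivation for mixing them.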

Highlights

  • Rain is a natural and common weather phenomenon that blurs objects in an image due to light refraction and scattering by rain streaks

  • To solve these problems, we propose a double recurrent dense network for removing rain streaks from a single image

  • Deep-learning-based methods improve the performance of removing rain streaks from a single image

Summary

INTRODUCTION

Rain is a natural and common weather phenomenon that blurs objects in an image due to light refraction and scattering by rain streaks. Heavy-rain images have a high density and uneven distribution of rain, which makes rain removal very challenging. Many models for this problem have been proposed, including residual blocks [7], dilated convolution [8], [10], squeeze-and-excitation [10], recurrent layers [10], [11], and multistage networks [10]. However, these methods have two shortcomings: firstly, they take context information into account but ignore feature reuse; secondly, the loss functions of these methods [7], [8], [10] are based on the L2 norm. To solve these problems, we propose a double recurrent dense network for removing rain streaks from a single image. Our model is not only far more effective than current methods at removing light and heavy rain; experiments also show that it is superior to existing Gaussian denoising networks at removing Gaussian noise.
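The estimate-and-subtract recurrence described above can be sketched as follows. Here `estimate_streaks` is a hypothetical placeholder standing in for the paper's LSTM/DenseNet network, and the stage count is an assumption; the sketch only illustrates the decomposition O = B + R, hence B = O - R, applied iteratively.

```python
import numpy as np

def derain_iterative(rainy, estimate_streaks, num_stages=3):
    """Recurrent deraining sketch: at each stage, estimate the rain
    streaks from the rainy image and the current clean estimate, then
    subtract them from the rainy input (O = B + R  =>  B = O - R)."""
    clean = rainy.copy()  # initial guess: the rainy image itself
    for _ in range(num_stages):
        streaks = estimate_streaks(rainy, clean)  # network forward pass
        clean = rainy - streaks                   # refined clean estimate
    return clean
```

With an oracle estimator that returns the true streaks, a single stage recovers the clean image exactly; a learned estimator would instead refine the clean estimate over successive stages.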

RELATED WORK
THE PROPOSED DERAINING METHOD
THE DESIGN OF RDNET
RLDNET ARCHITECTURE
RGDNET ARCHITECTURE
LOSS FUNCTION
EXPERIMENTAL RESULTS
ANALYSIS ON OUR PROPOSED MODEL
CONCLUSION