Abstract
Rain streaks degrade visual quality and can therefore impair many vision algorithms. We present a double recurrent dense network for removing rain streaks from a single image. Assuming the rainy image is a superposition of the clean image and the rain streaks, we directly learn the rain streaks from the rainy image. In contrast to other models, we introduce a double recurrent scheme to promote better reuse of information about both the rain streaks and the relatively clean image. For the rain streaks, an LSTM cascaded with DenseNet blocks is used as the basic model. The relatively clean image, predicted by subtracting the rain streaks from the rainy image, is then fed back into the basic model iteratively. Benefiting from the double recurrent scheme, our model makes full use of rain-streak and image-detail information and thoroughly removes rain streaks. Furthermore, we adopt a mix of $L_{1}$ loss, $L_{2}$ loss, and SSIM loss to guarantee good rain removal performance. We conduct extensive experiments on synthetic and real rainy images, as well as on the related task of image denoising. The results show not only that our model significantly outperforms state-of-the-art methods for removing rain streaks, but also that it is highly effective on the related denoising task.
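To make the mixed loss more concrete, here is a minimal PyTorch sketch, not the authors' code: the relative weights `w1`, `w2`, `w3` and the simplified uniform-window SSIM are illustrative assumptions rather than values taken from the paper.

```python
# A minimal sketch (assumed, not the authors' implementation) of the mixed
# L1 + L2 + SSIM loss mentioned in the abstract. Inputs are expected in [0, 1].
import torch
import torch.nn.functional as F

def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM using a uniform (box) window instead of a Gaussian."""
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return ssim_map.mean()

def mixed_loss(pred, target, w1=1.0, w2=1.0, w3=0.2):
    """Weighted L1 + L2 + (1 - SSIM); the weights here are placeholders."""
    return (w1 * F.l1_loss(pred, target)
            + w2 * F.mse_loss(pred, target)
            + w3 * (1.0 - ssim(pred, target)))
```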
Highlights
Rain is a natural and common weather phenomenon that blurs objects in an image due to light refraction and scattering by rain streaks.
To address these problems, we propose a double recurrent dense network for removing rain streaks from a single image.
Deep-learning-based methods have improved the performance of removing rain streaks from a single image.
Summary
Rain is a natural and common weather phenomenon that blurs objects in an image due to light refraction and scattering by rain streaks. A heavy-rain image has a high density and uneven distribution of rain, which makes the rain removal task very challenging. Many models for this problem have been proposed, including residual blocks [7], dilated convolution [8], [10], squeeze-and-excitation [10], recurrent layers [10], [11], and multi-stage networks [10]. However, these models have two shortcomings. First, they take context information into account but ignore feature reuse; second, the loss functions of these methods [7], [8], [10] are based on the L2 norm. To address these problems, we propose a double recurrent dense network for removing rain streaks from a single image. Not only is our network far more effective than current methods at removing both light and heavy rain, but experiments also show that it is superior to existing Gaussian denoising networks at removing Gaussian noise.
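To illustrate the double recurrent idea, the following is a rough PyTorch sketch under stated assumptions: a simple convolutional recurrent gate stands in for the LSTM, a small dense block provides feature reuse, and at each stage the predicted rain streaks are subtracted from the rainy image to form the relatively clean estimate that is fed back. Layer widths, the number of stages, and the block designs are illustrative and are not the paper's exact architecture.

```python
# A hypothetical sketch of a double recurrent dense deraining loop,
# not the authors' published code.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """A small densely connected block: each conv sees all earlier features."""
    def __init__(self, channels=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True)))
            c += growth
        self.fuse = nn.Conv2d(c, channels, 1)  # compress back to `channels`

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))

class RecurrentDerainNet(nn.Module):
    """Predicts rain streaks at each stage; the clean estimate is rainy - rain."""
    def __init__(self, channels=32, stages=4):
        super().__init__()
        self.channels = channels
        self.stages = stages
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        # A simple convolutional gate standing in for the paper's LSTM.
        self.gate = nn.Conv2d(channels * 2, channels, 3, padding=1)
        self.dense = DenseBlock(channels)
        self.to_rain = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, rainy):
        b, _, hgt, wid = rainy.shape
        h = rainy.new_zeros(b, self.channels, hgt, wid)  # recurrent hidden state
        x = rainy                                        # estimate fed back each stage
        outputs = []
        for _ in range(self.stages):
            f = self.embed(x)
            h = torch.tanh(self.gate(torch.cat([f, h], dim=1)))  # reuse past state
            rain = self.to_rain(self.dense(h))                   # predicted rain streaks
            x = rainy - rain                                     # relatively clean image
            outputs.append(x)
        return outputs  # per-stage clean estimates; the last is the final result

# Minimal usage: each per-stage estimate could be supervised with the mixed
# L1 + L2 + SSIM loss sketched above.
net = RecurrentDerainNet()
rainy = torch.rand(1, 3, 64, 64)        # dummy rainy image in [0, 1]
clean_estimates = net(rainy)
print(clean_estimates[-1].shape)        # torch.Size([1, 3, 64, 64])
```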