Abstract

Current learning-based single image deraining networks are usually designed around a simplified linear additive rain model, which not only produces unrealistic synthetic rainy images for both the training and testing datasets, but also limits the applicability and generality of the corresponding networks. In this paper, we use the screen blend model of Photoshop as the nonlinear rainy image decomposition model. Based on this model, we design a novel channel attention U-DenseNet for rain detection and a residual dense block for rain removal. The detection sub-network not only recalibrates channel-wise feature responses with our novel channel attention block so that it focuses on learning the rain map, but also combines context information with precise localization through the U-DenseNet to promote pixel-wise estimation accuracy. After rain detection, we use the nonlinear model to obtain a coarse rain-free image, and then introduce a deraining refinement sub-network consisting of residual dense blocks to obtain a fine rain-free image. To train our network, we apply the nonlinear rain model to synthesize a benchmark dataset called RITD, which contains 3200 triplets of rainy images, rain maps, and clean background images. Extensive quantitative and qualitative experiments show that our method outperforms several state-of-the-art methods on both synthetic and real images.
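
The paper itself does not include code; the following is a minimal sketch of the screen blend composition it builds on, assuming the background image B and the rain map R are float arrays normalized to [0, 1]. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def screen_blend(background, rain_map):
    """Compose a rainy image with Photoshop's screen blend:
    O = 1 - (1 - B) * (1 - R), i.e. O = B + R - B * R.
    Inputs are float arrays in [0, 1] with matching shapes."""
    return 1.0 - (1.0 - background) * (1.0 - rain_map)

def additive_blend(background, rain_map):
    """Linear additive model O = B + R, clipped to [0, 1]; bright
    (gray-white) background pixels saturate easily under this model."""
    return np.clip(background + rain_map, 0.0, 1.0)

def recover_background(rainy, rain_map, eps=1e-6):
    """Invert the screen blend once the rain map is estimated:
    B = (O - R) / (1 - R). This yields the coarse rain-free image
    that a refinement stage can then polish."""
    return np.clip((rainy - rain_map) / (1.0 - rain_map + eps), 0.0, 1.0)
```

Unlike the additive model, the screen blend never exceeds 1, so bright background pixels are not clipped when rain is added; inverting it after rain detection gives the coarse rain-free estimate described in the abstract.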

Highlights

  • Images with rain streaks usually degrade the robustness of existing computer vision algorithms

  • We use U-DenseNet, which combines context with low-level location information, to promote pixel-wise estimation accuracy, and introduce the residual dense block to fuse hierarchical features from all convolutional layers for accurate rain removal (see the sketch after this list)

  • In order to synthesize more realistic rainy images, we enrich the dataset in two aspects: 1) we use the nonlinear rain model to combine a rain map and a clean background image into a rainy image; 2) we enrich the diversity of rain maps in terms of transparency, scale, and density
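
The layer-level design of the residual dense block is not spelled out in this summary; the PyTorch sketch below shows a standard residual dense block (dense connections, 1x1 local feature fusion, and a local residual connection), under the assumption that the paper's block follows this common design. Layer counts and channel widths are placeholders.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Illustrative residual dense block: each 3x3 conv sees the
    concatenation of all previous feature maps (dense connections);
    a 1x1 conv fuses them and a residual connection adds the input."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))  # local residual learning
```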


Summary

INTRODUCTION

Images with rain streaks usually degrade the robustness of existing computer vision algorithms. The authenticity of the data directly affects the robustness of data-driven deraining networks. Some of these methods [20], [22] use a linear additive rain model to synthesize rainy images, which is likely to saturate some gray-white pixels of the background image. The nonlinear screen blend model describes the relationship between the rain layer and the background more realistically than the linear additive model. Based on this model, we design a channel attention U-DenseNet for rain map detection and a residual dense block for rain removal. To accurately estimate the rain map, which contains both low-level information (e.g., location and edge) and context information (e.g., density and transparency), we apply a novel channel attention U-DenseNet as the detection sub-network. The U-DenseNet combines context with low-level location information to promote pixel-wise estimation accuracy, and the residual dense block fuses hierarchical features from all convolutional layers for accurate rain removal.
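
The exact structure of the paper's channel attention block (described as novel) is not given in this summary; the sketch below is only an illustrative squeeze-and-excitation-style module, showing how channel-wise feature responses can be recalibrated so that the detection sub-network emphasizes rain-related channels. The channel count and reduction ratio are assumed values.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel attention: global average pooling squeezes
    each channel to a scalar, a small bottleneck predicts per-channel
    weights, and the input features are rescaled channel-wise."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid())

    def forward(self, x):
        weights = self.excite(self.pool(x))   # shape: (N, C, 1, 1)
        return x * weights                    # recalibrate channel responses
```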

RELATED WORK
DEEP LEARNING-BASED DERAINING METHODS
THE RAINY IMAGE DATASET
OUR NETWORK
DATASETS AND METRICS
Findings
CONCLUSION