Abstract

Single-image deraining aims to restore an image degraded by rain streaks, where the long-standing bottleneck lies in disentangling the rain streaks from the given rainy image. Despite the progress made by substantial existing work, several crucial questions have not been well investigated: how to distinguish rain streaks from the clean image, how to disentangle rain streaks from low-frequency pixels, and how to prevent blurry edges. In this paper, we attempt to solve all of them under one roof. Our observation is that rain streaks are bright stripes with higher pixel values, evenly distributed in each color channel of the rainy image, so that disentangling the high-frequency rain streaks is equivalent to decreasing the standard deviation of the pixel distribution of the rainy image. To this end, we propose a self-supervised rain streaks learning network that characterizes the similar pixel distributions of rain streaks from a macroscopic viewpoint over various low-frequency pixels of gray-scale rainy images, coupled with a supervised rain streaks learning network that explores the specific pixel distribution of rain streaks from a microscopic viewpoint between each pair of rainy and clean images. Building on this, a self-attentive adversarial restoration network is introduced to prevent blurry edges. Together, these networks compose an end-to-end Macroscopic-and-Microscopic Rain Streaks Disentanglement Network, named M²RSD-Net, which learns rain streaks that are subsequently removed for single-image deraining. Experimental results validate its advantages on deraining benchmarks against the state of the art. The code is available at: https://github.com/xinjiangaohfut/MMRSD-Net.
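The standard-deviation observation can be illustrated with a minimal NumPy sketch (not taken from the paper's code; the toy patch and values are hypothetical): adding bright, evenly spaced streaks to a low-frequency background widens the pixel distribution, so removing them necessarily shrinks its standard deviation.

```python
import numpy as np

# Hypothetical toy example: a smooth, low-frequency "clean" patch
# (constant intensity stands in for low-frequency image content).
clean = np.full((64, 64), 0.4)

# Additive bright rain streaks: vertical stripes with higher pixel
# values, roughly evenly distributed across the patch, as the
# abstract observes.
rain = np.zeros_like(clean)
rain[:, ::8] = 0.5
rainy = np.clip(clean + rain, 0.0, 1.0)

# Disentangling (removing) the high-frequency streaks decreases the
# standard deviation of the pixel distribution.
print(rainy.std() > clean.std())  # prints: True
```

This is only a sanity check of the statistical claim, not a deraining method: real rain streaks are not perfectly regular stripes, but the same ordering of standard deviations holds whenever the streaks are additive and brighter than their surroundings.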

