Abstract

The quality of images captured on rainy days is severely degraded, which reduces the accuracy of subsequent computer vision tasks. Recently, many deep learning-based methods have demonstrated superior performance for single image deraining. However, several issues remain. Since real-world rain images and their corresponding ground truths are difficult to collect, models trained on limited data are prone to overfitting. Meanwhile, although many methods can remove part of the rain streaks, most of them cannot reconstruct precise edges and textures. For the first issue, we adopt a transfer learning approach: loading parameters pre-trained on ImageNet gives the network a robust feature representation, which improves its generalization. For the second issue, we restore clear details by making full use of the frequency domain information of the image. Specifically, we design a novel frequency domain residual block (FRDB) and use an efficient fusion strategy within FRDB to fuse spatial and frequency domain features. We then propose a frequency domain reconstruction loss (FDR loss) to restore details by reducing the differences in high-frequency space. Finally, a simple detail enhancement attention module (DEAM) is used to further enhance image details. Extensive experimental results demonstrate that our DPNet achieves superior performance on both synthetic and real data. Furthermore, we verify the effectiveness of our method on downstream computer vision tasks. The source code will be released at https://github.com/noxsine/DPNet.
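To illustrate the idea behind a frequency domain reconstruction loss, the sketch below compares the 2-D FFT spectra of a derained output and its ground truth with an L1 penalty. This is a minimal sketch, not the paper's exact formulation: the class name FDRLoss, the weighting factor, and the penalty on the full spectrum (rather than only its high-frequency components) are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class FDRLoss(nn.Module):
    """Hypothetical sketch of a frequency domain reconstruction loss.

    Penalizes the L1 distance between the 2-D FFT spectra of the network
    output and the ground truth. The actual FDR loss in the paper may
    restrict the comparison to high-frequency components.
    """

    def __init__(self, weight: float = 0.1):
        super().__init__()
        self.weight = weight  # assumed loss weight, not taken from the paper

    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # 2-D FFT over the spatial dimensions (H, W) of an (N, C, H, W) batch
        out_fft = torch.fft.fft2(output, dim=(-2, -1))
        tgt_fft = torch.fft.fft2(target, dim=(-2, -1))
        # Magnitude of the complex difference, averaged over the batch
        return self.weight * torch.abs(out_fft - tgt_fft).mean()


# Example usage: add the frequency term to a standard pixel-wise loss
# criterion = FDRLoss(weight=0.1)
# loss = nn.L1Loss()(derained, clean) + criterion(derained, clean)
```

A high-pass mask applied to both spectra before taking the difference would restrict the penalty to high frequencies, which is closer to the behavior the abstract describes.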
