Abstract

Image deraining aims to remove rain streaks from images and recover the information that rain causes outdoor images to lose. As a fundamental task in image processing, image deraining not only enhances the visibility of images but also provides necessary image restoration for downstream high-level vision tasks. Most existing deraining approaches train end-to-end models by minimizing the difference between the model output and the rain-free ground truth. Although these methods have achieved notable results, they often perform poorly on scenes with dense and varying rain streaks. In this paper, we propose a novel method called the Dual-Channel Component Decomposition Network (DCD-Net). The basic idea of DCD-Net is to leverage the separability prior of rainy images and treat the rain-free background layer and the rain-streak mask layer as two parallel component-extraction tasks. To this end, it builds a dual-branch parallel network whose branches extract the rain-free background image and decouple the reconstruction information of the rain-streak mask, respectively. Finally, it applies composite multi-level contrastive supervision to the outputs of the two branches, thereby achieving rain-streak removal. Extensive experiments on various datasets demonstrate that the proposed model outperforms existing methods in deraining images with dense rain streaks.
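To make the decomposition idea concrete, the sketch below illustrates the general dual-branch layer-separation scheme described above, not the authors' actual DCD-Net: a rainy image O is modeled as the sum of a rain-free background B and a rain-streak layer R, two parallel branches predict B and R, and a simple loss supervises B against the ground truth while asking B + R to re-compose the input. The branch widths, depths, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of dual-branch rainy-image decomposition (O = B + R).
# This is NOT the published DCD-Net; architecture details are assumed.
import torch
import torch.nn as nn

def conv_branch(in_ch=3, out_ch=3, width=32, depth=4):
    """Plain CNN branch standing in for one of the two parallel streams."""
    layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
    return nn.Sequential(*layers)

class DualBranchDecomposition(nn.Module):
    def __init__(self):
        super().__init__()
        self.background_branch = conv_branch()  # predicts rain-free background B
        self.rain_branch = conv_branch()        # predicts rain-streak layer R

    def forward(self, rainy):
        return self.background_branch(rainy), self.rain_branch(rainy)

def decomposition_loss(rainy, background, rain, clean_gt):
    """Supervise B against the ground truth and enforce the additive prior
    by asking the two predicted components to reconstruct the rainy input."""
    l1 = nn.functional.l1_loss
    return l1(background, clean_gt) + l1(background + rain, rainy)

# Toy usage with random tensors in place of real image batches.
model = DualBranchDecomposition()
rainy = torch.rand(2, 3, 64, 64)
clean_gt = torch.rand(2, 3, 64, 64)
background, rain = model(rainy)
loss = decomposition_loss(rainy, background, rain, clean_gt)
loss.backward()
```

The paper additionally applies composite multi-level contrastive supervision to the two branch outputs; that component is omitted here because the abstract does not specify its form.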
