Abstract

Deep Learning (DL) has achieved significant progress in single-image deraining. Most current DL methods, however, remain weak at recovering image details and learning the inherent correlations among features. In this work, we explore detail recovery mechanisms in both the network architecture and the loss function for image deraining. A U-Net-like architecture, named the progressive dense feature fusion network (PDFFN), is proposed to encode rainy images and decode them into clean ones. Specifically, a residual dense connection unit (ReDCU) is designed to handle rain streaks of various blurring degrees and resolutions by enriching features. Moreover, a progressive feature fusion module (PFFM), which progressively fuses the features from different stages of the U-Net, is devised not only to capture the inherent correlations of features across the encoder and decoder but also to intensify fine-grained details. To better evaluate the perceptual similarity between the ground truth and the derained image, we propose a detail perceptual loss that focuses on low-level unactivated features. Beyond the global rain removal strategy, this paper applies a contextual loss on target regions to perform joint deraining and detection. Comprehensive experiments substantiate the superiority of the proposed method, especially its detail recovery capability, over state-of-the-art methods both qualitatively and quantitatively.
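The idea of a detail perceptual loss on "unactivated" features can be illustrated with a minimal sketch: comparing raw (pre-activation) filter responses preserves weak edge signals that a nonlinearity such as ReLU would zero out. The sketch below is illustrative only and assumes a single hand-written edge filter in place of the pretrained low-level feature extractor used in the paper; the names `conv2d_valid` and `detail_perceptual_loss` are hypothetical, not the authors' implementation.

```python
# Hedged sketch of a detail perceptual loss on unactivated features.
# A fixed edge filter stands in for the low-level layers of a pretrained
# network; no activation is applied before comparing the feature maps.

def conv2d_valid(img, kernel):
    """2-D 'valid' convolution; returns raw (unactivated) responses."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += img[i + u][j + v] * kernel[u][v]
            row.append(s)
        out.append(row)
    return out

def detail_perceptual_loss(pred, target, kernel):
    """Mean absolute difference between unactivated feature maps."""
    fp = conv2d_valid(pred, kernel)
    ft = conv2d_valid(target, kernel)
    n = len(fp) * len(fp[0])
    return sum(abs(a - b)
               for ra, rb in zip(fp, ft)
               for a, b in zip(ra, rb)) / n

# Toy example: a vertical-edge filter as the "low-level feature".
EDGE = [[-1.0, 0.0, 1.0],
        [-1.0, 0.0, 1.0],
        [-1.0, 0.0, 1.0]]

clean = [[0.0, 0.0, 1.0, 1.0]] * 4    # ground truth with a sharp edge
blurry = [[0.0, 0.5, 0.5, 1.0]] * 4   # derained output with a softened edge

loss = detail_perceptual_loss(blurry, clean, EDGE)
```

Because the responses are compared before any activation, the softened edge in `blurry` yields a strictly positive loss, while a perfect reconstruction yields zero; a ReLU applied first could discard the small negative responses that distinguish the two.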
