Abstract

Deep convolutional neural networks have recently been widely applied to single-image deraining. However, as a network grows deeper, it becomes prone to over-fitting and performance saturation, particularly when training data are insufficient. In this paper, we report the design of a new network, the parallel deraining convolutional neural network (PARDNet), for single-image deraining. Specifically, PARDNet adopts two parallel residual sub-networks with different receptive fields to extract more comprehensive rain-streak characteristics while reducing network depth. Hybrid dilated convolution is employed to enlarge each sub-network's receptive field and capture richer contextual information. An efficient channel attention module is integrated into PARDNet to capture rain streaks more effectively and preserve more background detail. Furthermore, to facilitate training, residual learning is also fused into PARDNet in a holistic manner. Extensive experiments on synthetic and real-world rainy image datasets demonstrate the superiority of PARDNet for single-image deraining.
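To make the described design concrete, the following is a minimal PyTorch sketch of the overall structure: two parallel residual branches with different receptive fields, an efficient channel attention (ECA) module applied to the fused features, and a holistic residual connection. The class names (PARDNetSketch, ECA, ResBlock), layer counts, channel widths, and dilation rates (1, 2, 5) are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of the architecture outlined in the abstract; hyper-parameters are assumptions.
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient channel attention: a 1-D conv over globally pooled channel descriptors."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = self.pool(x)                              # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))  # (B, 1, C): conv across channels
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                                  # re-weight channels


class ResBlock(nn.Module):
    """Residual block whose receptive field is controlled by the dilation rate."""
    def __init__(self, channels, dilation=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        return x + self.body(x)


class PARDNetSketch(nn.Module):
    """Two parallel residual branches with different receptive fields,
    ECA-based feature re-weighting, and a holistic (global) residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Branch A: ordinary 3x3 residual blocks (smaller receptive field).
        self.branch_a = nn.Sequential(*[ResBlock(channels, dilation=1) for _ in range(4)])
        # Branch B: hybrid dilated convolution (assumed rates 1, 2, 5) for a larger receptive field.
        self.branch_b = nn.Sequential(ResBlock(channels, 1), ResBlock(channels, 2),
                                      ResBlock(channels, 5))
        self.eca = ECA()
        self.tail = nn.Conv2d(2 * channels, 3, 3, padding=1)

    def forward(self, rainy):
        f = self.head(rainy)
        fused = torch.cat([self.branch_a(f), self.branch_b(f)], dim=1)
        streaks = self.tail(self.eca(fused))  # attention-weighted fusion, then rain-streak map
        return rainy - streaks                # holistic residual: subtract predicted streaks


if __name__ == "__main__":
    out = PARDNetSketch()(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```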
