Abstract

In computer vision, single-image rain removal is an important pre-processing task, since rain streaks degrade outdoor image quality and the performance of subsequent high-level tasks. In recent years, many deep learning-based deraining methods have been proposed to remove rain streaks from rainy images, and they have achieved promising performance. However, most deep learning-based methods lack interpretability and offer limited performance in image detail restoration and rain streak removal. In this paper, we propose a novel rain streak model-driven deep network, MSANet, to alleviate these issues. First, instead of simply stacking off-the-shelf convolution, pooling, and attention blocks, the structure of MSANet is derived from a prior rain removal model and is thus fully interpretable. In contrast to prior models in the literature, we introduce a weighted convolutional dictionary model that captures the shapes, sizes, directions, regions, and multi-scale information of rain streaks. To reduce the bias associated with the L2 norm, we adopt an adaptive approach to constrain the fidelity term of the model. Finally, we employ the alternating direction method of multipliers (ADMM) algorithm to optimize this model and unfold the optimization procedure into a new neural network architecture by modeling each operation of the algorithm with a network block. All parameters are learned automatically through end-to-end training. Extensive experiments on several benchmark datasets demonstrate that the proposed network outperforms state-of-the-art methods in both subjective and objective evaluations.
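For readers unfamiliar with deep unfolding, the sketch below illustrates the general idea the abstract describes: an ADMM-style iteration is unrolled into a stack of learnable stages and trained end to end. This is a minimal, hypothetical PyTorch illustration only; the module names (UnfoldedStage, prox_rain, prox_bg), the simple additive decomposition of a rainy image into background and rain layers, and the CNN-based proximal operators are assumptions for exposition, not the actual MSANet blocks or the paper's weighted convolutional dictionary model.

```python
import torch
import torch.nn as nn


class UnfoldedStage(nn.Module):
    """One unrolled ADMM-style iteration (hypothetical sketch).

    Each update step of the iteration (rain-layer update, background
    update, multiplier update) is replaced by a small learnable module.
    """

    def __init__(self, channels: int = 32):
        super().__init__()
        # Proximal operator for the rain layer, approximated by a small CNN.
        self.prox_rain = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1))
        # Proximal operator for the background (rain-free) image.
        self.prox_bg = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1))
        # Learnable step size for the dual (Lagrange multiplier) update.
        self.eta = nn.Parameter(torch.tensor(0.1))

    def forward(self, rainy, rain, bg, dual):
        # Rain-layer update: proximal step on the current residual.
        rain = self.prox_rain(rainy - bg + dual)
        # Background update: proximal step on the rain-removed estimate.
        bg = self.prox_bg(rainy - rain + dual)
        # Scaled dual update enforcing the decomposition rainy ≈ bg + rain.
        dual = dual + self.eta * (rainy - bg - rain)
        return rain, bg, dual


class UnfoldedDerainNet(nn.Module):
    """Stack T stages; the number of stages mirrors the unrolled iterations."""

    def __init__(self, num_stages: int = 5):
        super().__init__()
        self.stages = nn.ModuleList([UnfoldedStage() for _ in range(num_stages)])

    def forward(self, rainy):
        rain = torch.zeros_like(rainy)
        bg = rainy.clone()
        dual = torch.zeros_like(rainy)
        for stage in self.stages:
            rain, bg, dual = stage(rainy, rain, bg, dual)
        return bg  # derained estimate
```

In the actual network, each stage would instead implement the update steps derived from the weighted convolutional dictionary model with its adaptive fidelity term, and all stage parameters are learned jointly from paired rainy/clean images during end-to-end training.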
