Abstract

Cameras convert optical signals into electronic information to enable image acquisition. However, cameras are susceptible to environmental factors such as lighting and weather conditions. Images captured in rainy conditions tend to degrade, with reduced contrast and visibility. Deraining aims to restore images degraded by raindrops and rain accumulation in rainy weather. Recent state-of-the-art, end-to-end neural-network-based deraining methods have achieved satisfactory results. We propose an improved multiscale information exchange module that extracts and fuses features across different scales, and the resulting structure achieves competitive rain-mask detection. Moreover, experiments show that rain-layer-based prediction models tend to leave rain-streak residuals in certain image patches. We therefore further develop a Feature Attention Compensation (FAC) module that better exploits the derained image and rain-layer information obtained from the multiscale model, which is demonstrated to boost deraining performance. In summary, we design a coarse-to-fine rain removal model: a rain detection network first produces a coarse rain-free image, which the FAC module then refines into the final rain-free image. We conduct experiments on both synthetic and real-world datasets. Quantitative and qualitative results demonstrate that the proposed method outperforms state-of-the-art deraining methods. Source code will be available at https://github.com/Jstar-s/CFMFN
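The coarse-to-fine design described above can be read as two stages wired in sequence: a rain detection network that estimates the rain layer and yields a coarse rain-free image, followed by the FAC module that refines it. The PyTorch sketch below illustrates only this wiring; the class names (RainDetectionNet, FACModule, CoarseToFineDeraining) and all layer choices are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the coarse-to-fine deraining pipeline outlined in the abstract.
# All module names and layer choices are hypothetical placeholders.
import torch
import torch.nn as nn


class RainDetectionNet(nn.Module):
    """Placeholder rain-detection branch: predicts a rain layer from the rainy input."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        return self.body(rainy)  # estimated rain layer


class FACModule(nn.Module):
    """Placeholder Feature Attention Compensation stage: refines the coarse
    rain-free image using the estimated rain layer as guidance."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, coarse: torch.Tensor, rain: torch.Tensor) -> torch.Tensor:
        residual = self.fuse(torch.cat([coarse, rain], dim=1))
        return coarse + residual  # compensated (fine) rain-free image


class CoarseToFineDeraining(nn.Module):
    def __init__(self):
        super().__init__()
        self.detector = RainDetectionNet()
        self.fac = FACModule()

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        rain = self.detector(rainy)    # stage 1: rain layer / mask estimation
        coarse = rainy - rain          # coarse rain-free image
        return self.fac(coarse, rain)  # stage 2: FAC refinement


if __name__ == "__main__":
    model = CoarseToFineDeraining()
    out = model(torch.randn(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```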
