Abstract

Existing methods for single-image raindrop removal either lack robustness or carry a heavy parameter burden. In this paper, we propose a new Adjacent Aggregation Network (A2Net) with a lightweight architecture to remove raindrops from single images. Instead of directly cascading convolutional layers, we design an adjacent aggregation architecture that better fuses features to generate rich representations, leading to high-quality image reconstruction. To further simplify the learning process, we exploit problem-specific knowledge and force the network to focus on the luminance channel in the YUV color space rather than all RGB channels. By combining the adjacent aggregation operation with a color space transformation, the proposed A2Net achieves state-of-the-art performance on raindrop removal with a significant reduction in parameters.

Highlights

  • Severe weather conditions, such as rain [1], haze [2], [3] and snow [4], impact human visual perception and outdoor computer vision systems [5]

  • Since many computer vision systems are designed based on the assumption of clean inputs, their performance is affected by adhered raindrops

  • We force our network to focus on the luminance channel of the YUV color space

Summary

INTRODUCTION

Severe weather conditions, such as rain [1], haze [2], [3] and snow [4], impact human visual perception and outdoor computer vision systems [5]. Since many computer vision systems are designed under the assumption of clean inputs, their performance degrades when raindrops adhere to the lens. Light passing through a raindrop converges to a point, causing a significant change in the luminance of the raindrop-occluded area, while the chrominance is not affected much. Based on this observation, we force our network to focus on the luminance channel of the YUV color space. We further design an adjacent aggregation architecture that fuses features from neighboring layers, generating more informative representations. By directly deploying this simple aggregation into existing network architectures, raindrop removal performance can be significantly improved without increasing the parameter count. Instead of directly processing RGB channels, we intentionally force the network to focus on the luminance (Y channel) and less on the chrominance (UV channels). This divide-and-conquer strategy effectively simplifies the learning process. The A2Net also exerts less pressure on system resources (e.g., CPU and memory), which makes it more suitable for practical applications.
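The luminance/chrominance decomposition behind this strategy can be sketched with a standard RGB-to-YUV transform. This is a minimal illustration, not the authors' code: the BT.601 coefficients below are an assumption, since the summary does not specify which YUV matrix the paper uses. A restoration network following this idea would operate mainly on the Y output, then recombine it with the (largely unchanged) U and V channels.

```python
# Hedged sketch: per-pixel RGB <-> YUV conversion (BT.601 coefficients assumed),
# illustrating how luminance (Y) is separated from chrominance (U, V) so a
# network can concentrate its capacity on the Y channel.

def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (floats in [0, 1]) to YUV."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: most raindrop distortion lives here
    u = -0.147 * r - 0.289 * g + 0.436 * b  # blue-difference chrominance
    v = 0.615 * r - 0.515 * g - 0.100 * b   # red-difference chrominance
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse transform, used to reassemble the restored image."""
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b
```

Because the transform is (approximately) invertible, restoring only Y and reusing the original U and V reconstructs a full-color output while the network solves a single-channel problem.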

RELATED WORKS
TRAINING DETAILS
EXPERIMENTS
FUTURE WORK
Findings
CONCLUSION
