Abstract
In this paper, we address the problem of removing view-disturbing raindrops from a single image. Among existing methods, machine-learning-based approaches are promising but require carefully prepared image pairs, i.e., a raindrop-degraded image and the corresponding clean image of the same scene, for training. To overcome this drawback, we propose a weakly supervised model that requires no paired training examples and needs only a collection of images with image-level annotations indicating the presence or absence of raindrops. Specifically, we train a raindrop detector in a multi-task learning manner to highlight raindrop regions. We then propose an attention-based generative network for raindrop removal and introduce a weighted preservation loss to retain non-raindrop details. Moreover, our model can be trained on a mixture of paired and unpaired samples, which makes it convenient to adapt to a new domain. Experiments verify the effectiveness of the proposed method; in particular, using only weakly supervised learning, our method achieves results comparable to state-of-the-art strongly supervised methods.
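To make the idea of the weighted preservation loss concrete, the following is a minimal sketch, assuming a PyTorch setting in which a raindrop attention map in [0, 1] marks likely raindrop pixels; the specific weighting (down-weighting detected raindrop regions so that the generator is encouraged to keep non-raindrop content close to the input) is an illustrative assumption, as the abstract does not state the exact formulation.

```python
import torch

def weighted_preservation_loss(generated: torch.Tensor,
                               rainy_input: torch.Tensor,
                               attention: torch.Tensor) -> torch.Tensor:
    """Hypothetical weighted preservation loss.

    generated, rainy_input: (N, 3, H, W) images.
    attention: (N, 1, H, W) map in [0, 1]; high values mark likely raindrop pixels.
    Pixels outside raindrop regions receive larger weight, so deviating from
    the input there is penalized and non-raindrop details are preserved.
    """
    weight = 1.0 - attention  # emphasize non-raindrop regions (assumed weighting)
    return torch.mean(weight * torch.abs(generated - rainy_input))
```

In practice such a term would be combined with adversarial and detection losses; the sketch only illustrates how an attention map could weight a pixel-wise reconstruction penalty.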