Abstract

Supervised deep learning techniques have been widely explored for real-photograph denoising and have achieved notable performance. However, being tied to specific training data, most current image denoising algorithms are easily restricted to certain noise types and generalize poorly across test sets. To address this issue, we propose a novel, flexible, and well-generalized approach, termed the dual meta attention network (DMANet). The DMANet is mainly composed of a cascade of self-meta attention blocks (SMABs) and collaborative-meta attention blocks (CMABs). These two blocks offer two advantages. First, they take both spatial and channel attention into account simultaneously, allowing our model to better exploit informative feature interdependencies. Second, the attention blocks are embedded with a meta-subnetwork, which is based on meta-learning and supports dynamic weight generation. Such a scheme provides a beneficial means for self- and collaborative updating of the attention maps on the fly. Instead of directly stacking the SMABs and CMABs to form a deep network architecture, we further devise a three-stage learning framework in which different blocks are used at each feature-extraction stage according to the individual characteristics of the SMAB and CMAB. On five real datasets, we demonstrate the superiority of our approach against the state of the art. Unlike most existing image denoising algorithms, our DMANet not only possesses good generalization capability but can also be flexibly used to cope with unknown and complex real noise, making it highly competitive for practical applications.
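To make the attention mechanism concrete, the sketch below illustrates, under stated assumptions, one way an attention block could combine channel and spatial attention while a small meta-subnetwork predicts the channel weights on the fly. The class names, layer choices, and hyperparameters are hypothetical and are not taken from the paper; this is not the authors' exact SMAB or CMAB design.

```python
# Minimal PyTorch sketch of the idea described in the abstract: an attention
# block combining channel and spatial attention, where a small "meta"
# subnetwork generates the channel-attention weights dynamically from the
# input features. All names and design details are illustrative assumptions.
import torch
import torch.nn as nn


class MetaChannelAttention(nn.Module):
    """Channel attention whose weights are predicted by a small meta-MLP."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Meta-subnetwork: maps pooled channel statistics to per-channel weights.
        self.meta_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        stats = self.pool(x).view(b, c)            # global channel statistics
        weights = self.meta_mlp(stats).view(b, c, 1, 1)
        return x * weights                         # dynamically re-weight channels


class SpatialAttention(nn.Module):
    """Simple spatial attention over pooled channel maps."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask


class SelfMetaAttentionBlock(nn.Module):
    """Residual block applying meta channel attention and spatial attention."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.channel_att = MetaChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        feat = self.channel_att(feat)
        feat = self.spatial_att(feat)
        return x + feat                            # residual connection


if __name__ == "__main__":
    block = SelfMetaAttentionBlock(64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the meta-MLP plays the role of dynamic weight generation: rather than learning a fixed attention map, it produces input-dependent channel weights at inference time, which is the general behavior the abstract attributes to the meta-subnetwork.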
