Abstract
Motivated by advances in deep learning, deep unfolding methods such as deep convolutional dictionary learning have achieved great success in image denoising. Their main advantage is that they inherit both the strong learning capacity of deep networks and the interpretability of traditional machine learning. We observe that, in deep unfolding-based methods, the updates of the dictionaries and coefficients are highly correlated with information from the previous iterative stages. However, most existing deep convolutional dictionary learning methods treat each iteration step in isolation, ignoring both the inner-memory within a stage and the cross-memory across stages. To alleviate these issues, we propose a dynamic inner-cross memory augmented attentional dictionary learning (M2ADL) network with an attention-guided residual connection module, which exploits important features from previous stages to better uncover inner-cross stage information. Specifically, the proposed inner-cross memory fully utilizes the previous stage's hidden-layer and last-layer information to learn the dictionary. In addition, we develop a dual attention-guided residual connection module that exploits deep feature learning to capture spatial-spectral attention across deep tensor-based features. Extensive experiments on both synthetic and real image datasets demonstrate the superiority of the proposed method over other state-of-the-art methods.
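To make the unfolding-with-memory idea concrete, here is a minimal sketch (not the authors' code) of a generic deep-unfolding denoising loop in which each stage update consumes both the previous stage's output and a carried hidden state, loosely analogous to the cross-stage memory described above. The soft-thresholding step, the identity dictionary, and the fixed `memory_weight` fusion are all illustrative assumptions; the actual M2ADL network learns its dictionaries, memory fusion, and attention modules.

```python
# Hedged sketch: iterative soft-thresholding unfolded over a few stages,
# with a toy "memory" that fuses the previous stage's state into the update.
# This illustrates the general structure only, not the M2ADL architecture.

def soft_threshold(x, t):
    """Proximal operator for an l1 sparsity penalty (elementwise shrinkage)."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def unfolding_denoise(y, num_stages=5, step=0.5, thresh=0.1, memory_weight=0.3):
    """Denoise a 1-D signal y with an unfolded iterative scheme.

    memory_weight controls how strongly the previous stage's state
    (the toy cross-stage memory) is fused into the current update.
    """
    x = list(y)        # current coefficient estimate
    hidden = list(y)   # carried state from the previous stage
    for _ in range(num_stages):
        # Gradient step toward the observation (identity dictionary for simplicity).
        grad = [xi - yi for xi, yi in zip(x, y)]
        x_new = [xi - step * g for xi, g in zip(x, grad)]
        # Fuse the update with the previous stage's state before thresholding.
        fused = [(1.0 - memory_weight) * xn + memory_weight * h
                 for xn, h in zip(x_new, hidden)]
        x = soft_threshold(fused, thresh)
        hidden = fused  # update the memory for the next stage
    return x
```

In this toy example, small noisy coefficients are shrunk to zero while large ones are retained, and the memory term smooths the trajectory of the estimates across stages; in the actual network these hand-set operations are replaced by learned convolutional dictionaries and attention-guided fusion.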
Published in: IEEE Transactions on Circuits and Systems for Video Technology