Abstract

Recent advances in deep learning have pushed forward the frontiers of real-photograph denoising. However, due to the inherent pooling operations in the spatial domain, current CNN-based denoisers are biased toward low-frequency representations and tend to discard high-frequency components. This bias leads to suboptimal visual quality, since image denoising aims to eliminate complex noise entirely while recovering all fine-scale and salient information. In this work, we tackle this challenge from the frequency perspective and present a new solution pipeline, termed the frequency attention denoising network (FADNet). Our key idea is a learning-based frequency attention framework in which feature correlations across a broad frequency spectrum are fully characterized, enhancing the representational power of the network over multiple frequency channels. Building on this, we design a cascade of adaptive instance residual modules (AIRMs). In each AIRM, we first transform the spatial-domain features into the frequency space; a learning-based frequency attention mechanism then exploits the feature inter-dependencies in the frequency domain. In addition, we introduce an adaptive layer guided by the estimated noise map and intermediate features to improve generalization across differing noise characteristics. The effectiveness of our method is demonstrated on several real-camera benchmark datasets, where it achieves superior denoising performance, generalization capability, and efficiency compared with the state of the art.
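The abstract does not specify the internals of the frequency attention framework, but the described pipeline (transform spatial features to the frequency domain, reweight frequency channels with learned attention, transform back) can be illustrated with a minimal NumPy sketch. All names here (`frequency_attention`, `attn_logits`) and the particular attention form (softmax-normalized per-channel scaling of the spectrum) are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def frequency_attention(features, attn_logits):
    """Hypothetical sketch of frequency-domain channel attention.

    features:    (C, H, W) real-valued spatial feature maps.
    attn_logits: (C,) learned per-channel attention logits (assumed form).
    """
    C = features.shape[0]
    # 1) Transform each spatial feature map into the frequency domain.
    spectra = np.fft.rfft2(features, axes=(-2, -1))
    # 2) Softmax-normalize the logits into per-channel attention weights.
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()
    # 3) Reweight the frequency channels (scaled by C so that uniform
    #    attention leaves the features unchanged).
    spectra = spectra * (C * w)[:, None, None]
    # 4) Return to the spatial domain.
    return np.fft.irfft2(spectra, s=features.shape[-2:], axes=(-2, -1))
```

With uniform (all-zero) logits the module reduces to the identity, which makes it a safe residual-style building block; in a cascade of AIRM-like modules, the logits would instead be predicted from the features themselves.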
