Abstract

Image denoising is a longstanding research topic in low-level vision. Model-based methods mainly depend on handcrafted prior terms to regularize the image denoising problem. However, designing a prior with significantly higher denoising performance than existing priors is challenging, and reconstructing a clean image via the traditional iterative framework is also time-consuming. Recently, denoising methods based on deep neural networks have achieved tremendous advances. Despite their good denoising performance, some current architectures suffer from weak interpretability because their network design is largely empirical. In this paper, we propose a novel deep neural network for the denoising task, designed by following the optimization process of a model-based denoising method. Specifically, by incorporating a novel global non-linear smoothness constraint prior term into a maximum a posteriori (MAP)-based cost function, a model-based denoising method is obtained. Then, inspired by the powerful modelling ability of deep learning techniques, we exploit the proposed denoising method to inform the network design, leading to a novel end-to-end trainable and interpretable deep network, called GNSCNet. In GNSCNet, each network module corresponds to a processing step of the proposed model-based method. Furthermore, to boost performance, GNSCNet performs denoising in a high-dimensional transform space (i.e., the feature domain). Experimental results demonstrate that the proposed network is superior to other state-of-the-art denoising methods.
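To make the MAP formulation concrete, the sketch below minimizes a generic cost of the form 0.5·||x − y||² + λ·R(x) by gradient descent, where y is the noisy image and R is a smoothness prior. The quadratic smoothness prior used here is a simple illustrative stand-in: the abstract does not specify the paper's global non-linear smoothness constraint, so this is only a minimal example of the model-based iteration that such an unfolded network would mirror.

```python
import numpy as np

def denoise_map(y, lam=0.5, step=0.1, iters=100):
    """Gradient descent on a MAP denoising cost:
        0.5 * ||x - y||^2 + lam * ||grad x||^2
    NOTE: the quadratic ||grad x||^2 prior is an illustrative assumption,
    not the paper's non-linear smoothness constraint."""
    x = y.copy()
    for _ in range(iters):
        # Gradient of the data-fidelity term 0.5 * ||x - y||^2.
        grad = x - y
        # Gradient of the smoothness prior: -2 * lam * Laplacian(x),
        # computed with periodic boundary conditions via np.roll.
        lap = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)
               + np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 4 * x)
        grad -= 2 * lam * lap
        x -= step * grad
    return x
```

In an unfolded network such as the one the paper describes, each iteration of a loop like this would be replaced by a trainable module, with the handcrafted prior gradient swapped for learned operators acting in a feature domain.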
