Abstract

Real-photograph denoising is extremely challenging in low-level computer vision because the noise is complex and cannot be fully modeled by explicit distributions. Although deep-learning techniques have been actively explored for this problem and have achieved convincing results, most networks are prone to vanishing or exploding gradients and usually require considerable time and memory to reach strong performance. This article addresses these challenges with a novel network, the PID controller guide attention neural network (PAN-Net), which takes advantage of both the proportional-integral-derivative (PID) controller and the attention neural network for real-photograph denoising. First, a PID-attention network (PID-AN) is built to learn and exploit discriminative image features. In addition, we devise a dynamic learning scheme that links the neural network to the control action, significantly improving the robustness and adaptability of the PID-AN. Second, we stack the PID-ANs using both a residual structure and share-source skip connections. This framework provides a flexible form of feature residual learning, which eases network training and boosts denoising performance. Extensive experiments show that PAN-Net achieves superior denoising results compared with state-of-the-art methods in terms of both image quality and efficiency.
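
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general idea it describes: an attention block whose weights are modulated by proportional, integral, and derivative terms of a feature "error" signal, with blocks stacked under a residual structure and a share-source skip connection. The layer sizes, the gains kp/ki/kd, the choice of error signal, and the class names (PIDAttentionBlock, PANNetSketch) are all illustrative assumptions, not the authors' design.

```python
# Hedged sketch (not the authors' code): a toy PID-guided attention block and a
# stack of such blocks with residual learning and a share-source skip connection.
import torch
import torch.nn as nn


class PIDAttentionBlock(nn.Module):
    """Illustrative block: channel attention modulated by P, I, and D terms
    computed from the block's input/output residual ("error") signal."""

    def __init__(self, channels: int, kp: float = 1.0, ki: float = 0.1, kd: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Squeeze-and-excitation style channel attention (an assumed choice).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        self.kp, self.ki, self.kd = kp, ki, kd

    def forward(self, x, integral, prev_error):
        feat = self.body(x)
        error = feat - x                      # proportional term: current residual
        integral = integral + error           # integral term: accumulated residual
        derivative = error - prev_error       # derivative term: change in residual
        pid = self.kp * error + self.ki * integral + self.kd * derivative
        out = x + self.attn(pid) * feat       # attention weights guided by the PID signal
        return out, integral, error


class PANNetSketch(nn.Module):
    """Stack of PID-attention blocks with a share-source skip connection."""

    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(PIDAttentionBlock(channels) for _ in range(num_blocks))
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, noisy):
        shallow = self.head(noisy)            # shared "source" features
        x = shallow
        integral = torch.zeros_like(shallow)
        prev_error = torch.zeros_like(shallow)
        for block in self.blocks:
            x, integral, prev_error = block(x, integral, prev_error)
            x = x + shallow                   # share-source skip connection
        residual = self.tail(x)
        return noisy - residual               # residual learning: predict the noise


if __name__ == "__main__":
    net = PANNetSketch()
    denoised = net(torch.randn(1, 3, 64, 64))
    print(denoised.shape)  # torch.Size([1, 3, 64, 64])
```

The sketch keeps the PID state (integral and previous error) as tensors threaded through the block stack, which is one plausible way to couple a control action with feature learning; the paper's actual dynamic learning scheme may differ.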
