Abstract

The gradient descent strategy is an important model optimization method that has been widely used for various computer vision tasks, such as model-based image denoising. In a gradient descent denoising model, the learning rate (LR) and the residual component are two key quantities that must be adaptively estimated for the iteration to reach a stable point. This letter proposes a deep gradient descent network (DGDNet) with two key contributions. First, the LR is designed from the eigenvalues of the Hessian matrix of remotely sensed images (RSIs) and their local weighted factor (LWF), which allows structures to be recognized in RSIs degraded by additive white Gaussian noise (AWGN). Second, the residual part is computed by a U-shaped network (USNet), which speeds up the convergence of the DGDNet to a fixed point. These two components are plugged into the gradient descent scheme and yield a satisfactory result within a few iterations. Quantitative and qualitative experimental results demonstrate that the proposed DGDNet obtains a stable solution efficiently and produces competitive denoising performance, which is even better than that yielded by state-of-the-art noise reduction methods.
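
To make the described scheme concrete, below is a minimal sketch (not the authors' code) of one unrolled gradient-descent denoising step: a per-pixel learning-rate map derived from second-order image differences (a crude proxy for the Hessian-eigenvalue/LWF design) scales a residual predicted by a small U-shaped network. All names here (SimpleUSNet, local_weighted_lr, dgd_step) are hypothetical illustrations, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SimpleUSNet(nn.Module):
    """Toy stand-in for the paper's U-shaped residual predictor."""
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, x):
        # Predicted residual (noise estimate) at the same resolution as x.
        return self.decode(self.encode(x))

def local_weighted_lr(x, eps=1e-6):
    """Hypothetical per-pixel learning rate built from second-order
    differences: damped near strong structures, larger in flat regions."""
    dxx = x[..., :, 2:] - 2 * x[..., :, 1:-1] + x[..., :, :-2]
    dyy = x[..., 2:, :] - 2 * x[..., 1:-1, :] + x[..., :-2, :]
    curv = torch.zeros_like(x)
    curv[..., :, 1:-1] += dxx.abs()
    curv[..., 1:-1, :] += dyy.abs()
    return 1.0 / (1.0 + curv + eps)  # in (0, 1], smaller on edges

def dgd_step(x, net):
    """One gradient-descent-style update: x <- x - lr * residual."""
    return x - local_weighted_lr(x) * net(x)

# Usage: run a few unrolled steps on a noisy image patch.
net = SimpleUSNet()
x = torch.randn(1, 1, 64, 64)  # stand-in for a noisy RSI patch
for _ in range(4):             # "a few iterations", as in the abstract
    x = dgd_step(x, net)
```

In practice the unrolled steps would be trained end to end against clean/noisy pairs; the sketch only illustrates how an adaptive learning-rate map and a network-predicted residual plug into the gradient descent update.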
