Abstract

Image denoising has long been an active research topic in the image processing community. With the advance of deep learning, convolutional neural networks have recently brought dramatic progress to image denoising; however, they usually cannot recover atypical details from noisy images. Motivated by this, we propose a multiview texture-aware convolutional neural network, named MVTANet, which comprises a primary denoising network and a multiview texture-aware module. The proposed multiview texture-aware module has two variants, i.e., a main and a secondary texture-aware module. First, a denoised image is obtained through the primary denoising network. Then, the denoised image and the clean image are each passed through the multiview texture-aware module, yielding two sets of intermediate features from which the corresponding perceptual loss is calculated. This perceptual loss is designed to provide auxiliary supervision for the recovery of tiny details. By setting different initial parameters and applying parameter freezing, the module can be made to focus further on restoring atypical details. Extensive experiments demonstrate that the proposed MVTANet outperforms state-of-the-art denoising methods.
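The auxiliary supervision described above amounts to a perceptual loss between the intermediate features that the texture-aware module extracts from the denoised image and from the clean image. The following is a minimal numpy sketch of that idea; the feature maps, their shapes, and the averaging scheme are illustrative assumptions, since the abstract does not specify the module's architecture:

```python
import numpy as np

def perceptual_loss(feats_denoised, feats_clean):
    """Mean squared error between corresponding intermediate feature
    maps of the denoised and the clean image (illustrative sketch)."""
    assert len(feats_denoised) == len(feats_clean)
    total = 0.0
    for fd, fc in zip(feats_denoised, feats_clean):
        total += np.mean((fd - fc) ** 2)
    return total / len(feats_denoised)

# Hypothetical intermediate features from the texture-aware module:
rng = np.random.default_rng(0)
f_clean = [rng.standard_normal((8, 8)) for _ in range(2)]
f_denoised = [f + 0.1 * rng.standard_normal((8, 8)) for f in f_clean]

loss = perceptual_loss(f_denoised, f_clean)
```

In training, this loss would be added to the primary denoising objective so that gradients also push the denoised image's texture features toward those of the clean image.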
