Abstract

A deep texture-adaptive denoising method is proposed to achieve high perceptual image quality. Texture information is learned through a designed loss function that utilizes a pre-generated texture map to distinguish texture areas from flat areas. During training, the proposed network internally identifies texture and flat regions and applies different denoising strengths to the two. Unlike existing DNN-based denoising methods, the proposed method retains high-frequency texture information while removing residual noise in flat regions as much as possible. The gradient distributions of the image before and after denoising were compared. The proposed method outperformed existing methods, achieving higher PSNR and SSIM scores and better visual quality. In addition, the denoising strength in texture regions was controllable with a single parameter. Thus, the proposed method is practically feasible as a denoising apparatus.
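The abstract's core idea, a loss weighted by a pre-generated texture map so that texture and flat regions are treated with different strengths, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the gradient-threshold texture detector, the function names, and the single control parameter `alpha` (down-weighting the fidelity term in flat regions so they are denoised more aggressively) are all assumptions.

```python
import numpy as np

def texture_map(img, threshold=0.1):
    # Crude texture map from local gradient magnitude (assumption: the
    # paper's pre-generated map may use a different texture detector).
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return (mag > threshold).astype(img.dtype)  # 1 = texture, 0 = flat

def texture_adaptive_loss(denoised, clean, tex_map, alpha=0.5):
    # Per-pixel squared error, weighted by region type. Texture pixels
    # keep full weight (preserve detail); flat pixels are down-weighted
    # by the single control parameter alpha, permitting stronger smoothing.
    per_pixel = (denoised - clean) ** 2
    weights = tex_map + alpha * (1.0 - tex_map)
    return float(np.mean(weights * per_pixel))
```

With `alpha = 1` the loss reduces to plain MSE; lowering `alpha` shifts the trade-off toward aggressive noise removal in flat regions while texture regions remain fully penalized for any deviation from the reference.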
