Abstract

Nonlocal self-similarity (NSS) is one of the most widely used priors in computer vision and image processing. It exploits the fact that a natural image typically contains many repetitive local patterns, so a local image patch usually has many similar patches elsewhere in the image. By jointly integrating these similar patches, the underlying patterns hidden beneath corrupting noise can be faithfully recovered. However, to exploit this prior, current methods search for similar patches with a simple block-matching strategy based on Euclidean distance, which largely overlooks patches that contain similar local patterns but differ in texture direction or color. To explore similar patches across an image more thoroughly, in this paper we propose two new representations for image patches, which yield an easily applicable NSS prior that measures the direction-invariant and color-invariant nonlocal self-similarity of image patches. Building on this prior term, we formulate color image denoising as a concise Bayesian posterior estimation problem and design an efficient expectation-maximization (EM) algorithm to solve it. Experiments on simulated and real noisy color images demonstrate the superiority of the proposed method over state-of-the-art approaches, both visually and quantitatively, verifying the potential usefulness of the new NSS prior.
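For context, the conventional block-matching step the abstract critiques can be sketched as below. This is an illustrative NumPy reimplementation, not the paper's code; the function name, patch size, search radius, and group size are all assumptions. Because the distance is plain Euclidean on raw pixel values, rotated or recolored copies of the same local pattern rank poorly, which is exactly the limitation the proposed direction-invariant and color-invariant representations are meant to address.

```python
import numpy as np

def block_match(image, ref_xy, patch_size=8, search_radius=16, k=16):
    """Return the k patches most similar to the reference patch under
    plain Euclidean distance -- the standard NSS search strategy.
    Illustrative sketch; all parameter defaults are assumptions."""
    h, w = image.shape[:2]
    rx, ry = ref_xy
    ref = image[ry:ry + patch_size, rx:rx + patch_size].astype(np.float64)

    # Restrict the search to a local window around the reference patch,
    # clipped so every candidate patch lies fully inside the image.
    y0, y1 = max(0, ry - search_radius), min(h - patch_size, ry + search_radius)
    x0, x1 = max(0, rx - search_radius), min(w - patch_size, rx + search_radius)

    candidates = []
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            cand = image[y:y + patch_size, x:x + patch_size].astype(np.float64)
            dist = np.sum((cand - ref) ** 2)  # raw-pixel Euclidean distance
            candidates.append((dist, (x, y)))

    # Keep the k best matches; similar patterns with a different
    # orientation or color are typically missed at this step.
    candidates.sort(key=lambda t: t[0])
    return candidates[:k]
```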
