Abstract

Multiresolution image transformations have become commonplace tools for analyzing images in a number of applications, including enhancement and restoration. Powerful models for these applications have been developed in the transform domain of multiresolution operators, taking advantage of the joint resolution of the transformed image in space and spatial frequency. The rich mathematical language of Bayesian statistics enables the formulation of complex and expressive models for capturing structure of various kinds in the transformed representations of images. In this thesis, we introduce several novel models for a set of related applications in multiresolution image processing. We develop a principled approach to multiscale dictionary learning in the transform domain of the Laplacian pyramid using the tools of sparse factor analysis to model the covariance structure of transformed images, and we extend this model to create a Markov-dependent dictionary learning model that incorporates known relationships among image structures across scales. Utilizing this multiscale model and related single-scale models, we develop a framework for capturing non-stationary noise characteristics in images using Gaussian mixture models. This extension enables us to describe and account for noise characteristics commonly observed in images. The tools of nonparametric Bayesian statistics allow these noise characteristics to be captured by models of arbitrary complexity. We apply our novel approach to a variety of image processing problems, including measuring and localizing multiple noise sources in images for forensic applications and denoising real-world color images corrupted with non-stationary and non-Gaussian noise sources. Finally, we propose a novel measure of image complexity based on joint statistical modeling of image transform coefficients. We demonstrate how this intuitive approach may be applied to a variety of image restoration and enhancement problems. Of primary interest is our demonstration of how our complexity model can be used as an empirically estimated prior distribution in transform-based denoising algorithms. We further show how the complexity model itself can be used as a "coefficient shrinkage" operator for denoising applications, and we derive properties of this estimator. Critically, our model removes the dependence of denoising algorithms on accurate and independent estimates of important model parameters.
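To make the transform-domain shrinkage idea concrete, the following is a minimal, self-contained Python sketch: it builds a Laplacian pyramid, applies shrinkage to the detail coefficients, and inverts the transform. It is not the thesis's model; the pyramid here is a basic blur/downsample/upsample construction, plain soft-thresholding stands in for the Bayesian shrinkage estimators described above, and the function names and the threshold lam are illustrative assumptions.

    # Illustrative sketch only -- not the thesis's model. Soft-thresholding
    # stands in for the Bayesian shrinkage estimators described in the
    # abstract; the threshold lam and all names are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def laplacian_pyramid(img, levels=4, sigma=1.0):
        """Split img into band-pass detail images plus a coarse residual."""
        bands = []
        current = img.astype(float)
        for _ in range(levels):
            low = gaussian_filter(current, sigma)
            down = low[::2, ::2]                      # next coarser scale
            up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
            bands.append(current - up)                # detail lost by down/up
            current = down
        return bands, current

    def reconstruct(bands, coarse):
        """Exactly invert laplacian_pyramid by upsampling and adding details."""
        current = coarse
        for band in reversed(bands):
            up = zoom(current, 2, order=1)[:band.shape[0], :band.shape[1]]
            current = up + band
        return current

    def soft_threshold(x, lam):
        """Shrink coefficients toward zero; small (noise-like) values vanish."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def denoise(img, levels=4, lam=0.05):
        """Transform -> shrink detail coefficients -> inverse transform."""
        bands, coarse = laplacian_pyramid(img, levels)
        bands = [soft_threshold(b, lam) for b in bands]
        return reconstruct(bands, coarse)

    # Usage: a synthetic test image with additive Gaussian noise.
    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    restored = denoise(noisy)

Note that the hand-picked threshold lam is exactly the kind of independently estimated parameter whose role the abstract's empirically estimated complexity prior is intended to remove.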
