Abstract

A fundamental problem in signal processing is to estimate a signal from noisy observations, which is usually formulated as an optimization problem. Optimization methods based on variational lower bounds and minorization-maximization have been widely used in machine learning, signal processing, and statistics. In this paper, we study iterative algorithms based on the conjugate function lower bound (CFLB) and minorization-maximization (MM) for a class of objective functions. We propose a generalized version of these two algorithms and show that they are equivalent when the objective function is convex and differentiable. We then develop a CFLB/MM algorithm for solving MAP estimation problems under a linear Gaussian observation model and modify this algorithm for wavelet-domain image denoising. Experimental results show that, using a single wavelet representation, the proposed algorithms perform better than the bishrinkage algorithm, which is arguably one of the best in recent publications. Using complex wavelet representations, the performance of the proposed algorithm is very competitive with that of state-of-the-art algorithms.
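
For orientation, the two building blocks named above can be summarized with a short worked sketch. This is standard material in our own notation (f, g, h, \phi, \lambda are not taken from the paper):

```latex
% MM for maximizing f: pick a minorizer g(x \mid x^{(k)}) satisfying
\begin{align*}
  g(x \mid x^{(k)}) &\le f(x) \quad \text{for all } x,
  & g(x^{(k)} \mid x^{(k)}) &= f(x^{(k)}),
\end{align*}
% and iterate x^{(k+1)} = \arg\max_x g(x \mid x^{(k)}); f then ascends monotonically:
\begin{equation*}
  f(x^{(k+1)}) \;\ge\; g(x^{(k+1)} \mid x^{(k)}) \;\ge\; g(x^{(k)} \mid x^{(k)}) \;=\; f(x^{(k)}).
\end{equation*}
% CFLB: if f(x) = h(\phi(x)) with h convex (e.g., \phi(x) = x^2), Fenchel duality
% h(u) = \sup_\lambda \{\lambda u - h^*(\lambda)\} gives, for any fixed \lambda,
\begin{equation*}
  f(x) \;\ge\; \lambda\,\phi(x) - h^*(\lambda),
\end{equation*}
% which is tight at x^{(k)} when \lambda = h'(\phi(x^{(k)})). A fixed-\lambda bound of
% this form is exactly an MM minorizer, which is the sense in which the two
% schemes coincide for convex differentiable objectives.
```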

Highlights

  • Estimating a signal from noisy observations is a fundamental task in signal processing

  • For the iterative generalized Wiener estimate (IGWE) algorithm, since the power exponential and a number of scale-mixture-of-Gaussians (SMG) distributions have been studied [11, 16, 48], we focus on the Student-t and slash distributions, which have not been widely applied to the denoising problem (see the sketch after this list)

  • We have studied conjugate function lower bound (CFLB)/MM algorithms for a special class of objective functions that are convex under a suitable mapping of the variable
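
As a concrete illustration of the IGWE-style iteration highlighted above, below is a minimal sketch for the Student-t case. It assumes the simplest per-coefficient denoising model y = x + e with a known Gaussian noise level and a Student-t prior; the function name `student_t_map_mm`, the parameter defaults, and the majorize-minimize derivation are ours, not the paper's exact IGWE update.

```python
import numpy as np

def student_t_map_mm(y, sigma_e, nu=3.0, sigma_x=1.0, n_iter=20):
    """Illustrative MM iteration for MAP denoising with a Student-t prior.

    Minimizes, per coefficient,
        J(x) = (y - x)^2 / (2 sigma_e^2)
             + ((nu + 1) / 2) * log(1 + x^2 / (nu * sigma_x^2)).
    The log term is concave in u = x^2, so its tangent majorizes it; each
    step then reduces a quadratic surrogate, giving a Wiener-like update
    (the majorize-minimize counterpart of MM).
    """
    x = np.asarray(y, dtype=float).copy()  # initialize at the noisy observation
    for _ in range(n_iter):
        # Weight from the tangent majorizer of the log prior at the current x.
        w = (nu + 1.0) / (nu * sigma_x**2 + x**2)
        # Closed-form minimizer of the quadratic surrogate: a shrinkage gain.
        x = y / (1.0 + sigma_e**2 * w)
    return x

# Toy usage: denoise a sparse signal corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros(256)
clean[::32] = 5.0
noisy = clean + 0.5 * rng.standard_normal(256)
denoised = student_t_map_mm(noisy, sigma_e=0.5)
```

Each iteration re-estimates a Wiener-like shrinkage gain from the current iterate, which is the qualitative behavior one expects from an iterative generalized Wiener estimate.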

Summary

Introduction

Estimating a signal from noisy observations is a fundamental task in signal processing. The observations are modeled as y = x + e, where x is a column vector of the true signal, and y and e are vectors of the observations and noise, respectively. When the noise is assumed independent and identically distributed (i.i.d.) Gaussian, the maximum likelihood (ML) estimation is a typical least-squares problem [1]. When the distribution of the noise is assumed heavy-tailed, the ML estimation is a robust regression problem [2]. When the noise is i.i.d. Gaussian, the maximum a posteriori (MAP) estimation problem is essentially a penalized least-squares problem; it is known as a ridge-regression [3] or weight-decay [4] problem when the prior for x is i.i.d. Gaussian. A typical application that exploits the sparseness of the signal is wavelet-based image denoising [9,10,11].
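
To make the MAP/penalized-least-squares connection concrete, here is a minimal sketch of the ridge-regression closed form that arises under i.i.d. Gaussian noise and an i.i.d. Gaussian prior. This is our own illustration; the name `ridge_map` and the toy data are hypothetical.

```python
import numpy as np

def ridge_map(H, y, sigma_e=1.0, sigma_x=1.0):
    """MAP estimate for y = H x + e with i.i.d. Gaussian noise and an
    i.i.d. Gaussian prior on x, i.e. the penalized least-squares problem
        min_x ||y - H x||^2 / (2 sigma_e^2) + ||x||^2 / (2 sigma_x^2),
    whose closed-form solution is the ridge-regression estimate.
    """
    lam = (sigma_e / sigma_x) ** 2  # regularization = noise/prior variance ratio
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Toy check: recover a vector from noisy linear measurements.
rng = np.random.default_rng(1)
H = rng.standard_normal((100, 10))
x_true = rng.standard_normal(10)
y = H @ x_true + 0.1 * rng.standard_normal(100)
x_hat = ridge_map(H, y, sigma_e=0.1, sigma_x=1.0)
```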

