Abstract

In this paper, we study the theoretical properties of a class of iteratively re-weighted least squares (IRLS) algorithms for sparse signal recovery in the presence of noise. We demonstrate a one-to-one correspondence between this class of algorithms and a class of Expectation-Maximization (EM) algorithms for constrained maximum likelihood estimation under a Gaussian scale mixture (GSM) distribution. The IRLS algorithms we consider are parametrized by 0 < ν ≤ 1 and ε > 0. The EM formalism, as well as the connection to GSMs, allows us to establish that the IRLS(ν, ε) algorithms minimize ε-smooth versions of the ℓν 'norms'. We leverage EM theory to show that, for each 0 < ν ≤ 1, the limit points of the sequence of IRLS(ν, ε) iterates are stationary points of the ε-smooth ℓν 'norm' minimization problem on the constraint set. Finally, we employ techniques from compressive sampling (CS) theory to show that the class of IRLS(ν, ε) algorithms is stable for each 0 < ν ≤ 1, provided the limit point of the iterates coincides with the global minimizer. For the case ν = 1, we show that the algorithm converges exponentially fast to a neighborhood of the stationary point, and we outline its generalization to super-exponential convergence for ν < 1. We demonstrate our claims via simulation experiments. The simplicity of IRLS, together with the theoretical guarantees provided in this contribution, makes a compelling case for its adoption as a standard tool for sparse signal recovery.
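To make the algorithm class concrete, the following is a minimal NumPy sketch of an IRLS(ν, ε) iteration, assuming the standard weight update w_i = (x_i² + ε²)^{ν/2−1} from the IRLS literature and an equality constraint Ax = y; the paper's exact parametrization, constraint set (which in the noisy setting would relax the equality), and stopping rule may differ.

```python
import numpy as np

def irls(A, y, nu=1.0, eps=1e-2, n_iter=50):
    """Illustrative IRLS(nu, eps) sketch: approximately minimize the
    eps-smooth l_nu 'norm' of x subject to A x = y (noiseless case)."""
    x = A.T @ np.linalg.solve(A @ A.T, y)        # least-norm initializer
    for _ in range(n_iter):
        # Weights of the eps-smoothed objective: w_i = (x_i^2 + eps^2)^(nu/2 - 1);
        # eps > 0 keeps the weights finite when entries of x approach zero.
        w = (x**2 + eps**2) ** (nu / 2.0 - 1.0)
        D = np.diag(1.0 / w)                     # inverse-weight matrix
        # Closed-form solution of: min_x sum_i w_i x_i^2  s.t.  A x = y
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
    return x

# Illustrative usage on a synthetic sparse-recovery instance.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = irls(A, y, nu=1.0, eps=1e-3)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

For ν = 1 the weights reduce to 1/√(x_i² + ε²), the smoothed-ℓ1 case covered by the exponential convergence claim above; taking ν < 1 sharpens the reweighting, corresponding to the super-exponential regime.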
