Abstract

Sparse Bayesian Learning (SBL) is an efficient and well-studied framework for sparse signal recovery. SBL relies on a parameterized prior on the sparse signal to be estimated. The prior is chosen (with estimated hyperparameters) so that it encourages sparsity in the representation of the signal. However, SBL does not scale with the problem dimensions due to the computational complexity associated with matrix inversion. To address this issue, there exist low-complexity methods based on approximate Bayesian inference. Various state-of-the-art approximate inference methods are based on variational Bayesian (VB) inference or on message passing algorithms such as belief propagation (BP) or expectation propagation (EP). Moreover, these approximate inference methods can be unified under the optimization of the Bethe free energy with appropriate constraints. SBL also allows one to treat more general signal models through a hierarchical prior formulation, which ultimately is more sparsity inducing than, e.g., a Laplacian prior. In this paper, we study the convergence behaviour of the mean and variance of the unknown parameters in SBL under approximate Bayesian inference.
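For context, a minimal sketch of the textbook SBL hierarchical model and of the matrix inversion that dominates its cost (this is the standard formulation in the style of Tipping's relevance vector machine; the notation is assumed and need not match the paper's):

% Standard SBL signal model (textbook formulation; notation assumed,
% not taken from this paper): y = A x + n with Gaussian noise and a
% Gaussian prior on each x_i whose precision alpha_i is itself random.
y = A x + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I),
\qquad x_i \sim \mathcal{N}(0, \alpha_i^{-1}),
\qquad \alpha_i \sim \mathrm{Gamma}(a, b).
% Marginalizing out the hyperparameters alpha_i yields a Student-t
% marginal on x_i, whose heavier tails make it more sparsity inducing
% than a Laplacian prior.
% The Gaussian posterior of x given y requires
\Sigma = \left( \sigma^{-2} A^{\top} A + \mathrm{diag}(\alpha) \right)^{-1},
\qquad \mu = \sigma^{-2} \Sigma A^{\top} y,
% and this inversion, cubic in the signal dimension per iteration, is
% the scalability bottleneck that approximate inference methods
% (VB, BP, EP) are designed to avoid.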
