Abstract

Within the conventional sparse Bayesian learning (SBL) framework, only Gaussian scale mixtures have been adopted to model sparsity-inducing priors that guarantee exact inverse recovery. Given the relative scarcity of formal SBL tools for enforcing a proper sparsity profile on signal vectors, we explore the use of hierarchical synthesis lasso (HSL) priors for representing the same small subset of features among multiple responses. We outline a viable approximation to this particular choice of sparse prior, leading to tractable marginalization over all weights and hyperparameters. We then discuss how the statistical variables of the hierarchical Bayesian model can be estimated via an adaptive updating formula, and we include a refined one-dimensional searching procedure that substantially improves direction of arrival (DOA) estimation performance when off-grid DOAs are taken into account. Using these modifications, we show that exploiting HSL priors is very effective in encouraging sparsity. Numerical simulations also verify the superiority of the proposed method in convergence speed and root-mean-squared estimation error over both traditional and more recent sparse Bayesian algorithms.
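
For context, the sketch below illustrates the conventional SBL baseline that the abstract contrasts against: an EM-style update of Gaussian-scale-mixture hyperparameters for a multiple-snapshot DOA model, followed by a simple one-dimensional fine search around the coarse-grid peak. This is a generic illustration under assumed array parameters; the helper names (`steering_matrix`, `sbl_mmv`, `refine_peak`) are hypothetical, and it is not the paper's HSL algorithm.

```python
# Minimal sketch of conventional (Gaussian-prior) SBL for multi-snapshot DOA
# estimation, plus a simple one-dimensional off-grid refinement. Generic
# illustration only; NOT the paper's HSL method. Array geometry, noise level,
# and refinement scoring are illustrative assumptions.
import numpy as np

def steering_matrix(grid_deg, n_sensors, spacing=0.5):
    # Uniform linear array steering vectors (half-wavelength spacing) for
    # each candidate DOA on the grid; shape (n_sensors, n_angles).
    angles = np.deg2rad(np.atleast_1d(grid_deg))
    m = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * m * np.sin(angles)[None, :])

def sbl_mmv(Y, A, noise_var, n_iter=300, tol=1e-6):
    # EM-style SBL recursion for the MMV model Y = A X + N, with a Gaussian
    # scale-mixture prior x_i ~ CN(0, gamma_i) on each row of X.
    M, T = Y.shape
    gamma = np.ones(A.shape[1])                    # prior variances (hyperparameters)
    for _ in range(n_iter):
        AG = A * gamma[None, :]                    # A @ diag(gamma)
        Sigma_y = noise_var * np.eye(M) + AG @ A.conj().T
        K = AG.conj().T @ np.linalg.inv(Sigma_y)   # diag(gamma) A^H Sigma_y^{-1}
        mu = K @ Y                                 # posterior mean of X
        Sigma_diag = gamma - np.real(np.sum(K * AG.T, axis=1))  # diag of posterior cov
        gamma_new = np.sum(np.abs(mu) ** 2, axis=1) / T + Sigma_diag
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma

def refine_peak(Y, grid_deg, gamma, n_sensors, half_width=1.0, step=0.01):
    # Fine one-dimensional scan around the coarse-grid peak; here a plain
    # beamforming score stands in for the refined search the abstract alludes to.
    k = int(np.argmax(gamma))
    fine = np.arange(grid_deg[k] - half_width, grid_deg[k] + half_width + step, step)
    A_fine = steering_matrix(fine, n_sensors)
    power = np.sum(np.abs(A_fine.conj().T @ Y) ** 2, axis=1)
    return fine[int(np.argmax(power))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, T, true_doas = 12, 100, np.array([10.3, 20.0])   # one DOA is off-grid
    X = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
    noise = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
    Y = steering_matrix(true_doas, M) @ X + noise
    grid = np.arange(-90.0, 90.0, 1.0)                  # coarse 1-degree grid
    gamma = sbl_mmv(Y, steering_matrix(grid, M), noise_var=0.01)
    print("coarse peak:", grid[np.argmax(gamma)])
    print("refined DOA:", refine_peak(Y, grid, gamma, M))
```

The coarse grid localizes the source to the nearest grid point; the fine search then recovers the off-grid angle, which is the failure mode of fixed-grid sparse methods that the abstract's refinement step targets.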
