Abstract

Within the conventional sparse Bayesian learning (SBL) framework, only Gaussian scale mixtures have been adopted to model the sparsity-inducing priors that guarantee exact inverse recovery. In light of the relative scarcity of formal SBL tools for enforcing a proper sparsity profile on signal vectors, we explore the use of hierarchical synthesis lasso (HSL) priors to represent the same small subset of features across multiple responses. We outline a viable approximation to this choice of sparse prior, leading to tractable marginalization over all weights and hyperparameters. We then discuss how the statistical variables of the hierarchical Bayesian model can be estimated via an adaptive updating formula, and include a refined one-dimensional search procedure that markedly improves direction-of-arrival (DOA) estimation performance when off-grid DOAs are taken into account. With these modifications, we show that exploiting HSL priors is very effective in encouraging sparsity. Numerical simulations also verify the superiority of the proposal in terms of convergence speed and root-mean-squared estimation error, as compared with traditional and more recent sparse Bayesian algorithms.
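The paper's HSL-prior updates and refined search are not reproduced here, but the following minimal Python sketch illustrates the general pipeline the abstract describes: an EM-style SBL hyperparameter update over a coarse angular grid, followed by a fine one-dimensional scan around each coarse peak to handle off-grid DOAs. All function names, the matched-filter refinement criterion, and the classical Gaussian-prior gamma update (used in place of the unspecified HSL update) are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def steering_matrix(thetas_deg, n_sensors, spacing=0.5):
    """ULA steering vectors for candidate DOAs (half-wavelength spacing assumed)."""
    thetas = np.deg2rad(np.asarray(thetas_deg))
    n = np.arange(n_sensors)[:, None]
    return np.exp(2j * np.pi * spacing * n * np.sin(thetas)[None, :])

def sbl_doa(Y, grid_deg, n_iters=200, noise_var=1e-2):
    """EM-style SBL over a coarse grid. Uses the classical Gaussian-prior
    gamma update as a stand-in for the paper's HSL-prior update."""
    M, T = Y.shape
    A = steering_matrix(grid_deg, M)
    gamma = np.ones(A.shape[1])                 # per-grid-point signal variances
    for _ in range(n_iters):
        Sigma_y = noise_var * np.eye(M) + (A * gamma) @ A.conj().T
        B = np.linalg.solve(Sigma_y, A)         # Sigma_y^{-1} A
        mu = gamma[:, None] * (B.conj().T @ Y)  # posterior means of the weights
        post_var = gamma - gamma**2 * np.real(np.sum(A.conj() * B, axis=0))
        gamma = np.mean(np.abs(mu)**2, axis=1) + post_var  # adaptive update
    return gamma

def refine_peaks(Y, gamma, grid_deg, n_sources=2, half_width=1.0, n_fine=201):
    """Hypothetical one-dimensional refinement: around each coarse-grid peak,
    scan a fine angular grid and keep the angle with the largest
    matched-filter output, to recover off-grid DOAs."""
    M = Y.shape[0]
    peaks = [i for i in range(1, len(gamma) - 1)
             if gamma[i] >= gamma[i - 1] and gamma[i] >= gamma[i + 1]]
    peaks.sort(key=lambda i: gamma[i], reverse=True)
    refined = []
    for i in peaks[:n_sources]:
        fine = np.linspace(grid_deg[i] - half_width, grid_deg[i] + half_width, n_fine)
        Af = steering_matrix(fine, M)
        score = np.sum(np.abs(Af.conj().T @ Y)**2, axis=1)
        refined.append(fine[np.argmax(score)])
    return sorted(refined)
```

A toy usage, with two deliberately off-grid sources and a 1-degree coarse grid:

```python
# Two off-grid sources observed by a 10-element ULA over 50 snapshots.
rng = np.random.default_rng(0)
M, T = 10, 50
true_doas = [-10.3, 15.7]                # deliberately off the 1-degree grid
A_true = steering_matrix(true_doas, M)
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
N = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A_true @ S + N
grid = np.arange(-60.0, 61.0, 1.0)
gamma = sbl_doa(Y, grid)
print(refine_peaks(Y, gamma, grid))      # estimates close to [-10.3, 15.7]
```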
