Abstract

Sparse Bayesian learning and specifically relevance vector machines have received much attention as a means of achieving parsimonious representations of signals in the context of regression and classification. We provide a simplified derivation of this paradigm from a Bayesian evidence perspective and apply it to the problem of basis selection from overcomplete dictionaries. Furthermore, we prove that the stable fixed points of the resulting algorithm are necessarily sparse, providing a solid theoretical justification for adapting the methodology to basis selection tasks. We then include simulation studies comparing sparse Bayesian learning with basis pursuit and the more recent FOCUSS class of basis selection algorithms, empirically demonstrating superior performance in terms of average sparsity and success rate of recovering generative bases.
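To make the paradigm concrete, here is a minimal sketch of sparse Bayesian learning applied to basis selection from an overcomplete dictionary. It is not the paper's implementation; it follows the standard evidence-maximization (MacKay-style) hyperparameter updates, with the noise variance `sigma2`, the iteration count, and the pruning threshold `alpha_max` chosen as illustrative assumptions. Basis functions whose hyperparameters diverge are pruned, which is the mechanism behind the sparse fixed points discussed above.

```python
import numpy as np

def sbl_basis_selection(Phi, t, sigma2=1e-6, n_iter=200, alpha_max=1e12):
    """Sketch of sparse Bayesian learning for basis selection.

    Phi : (N, M) overcomplete dictionary (M > N); t : (N,) observed signal.
    Returns the posterior mean weight vector (M,), sparse at convergence.
    All default parameter values are illustrative assumptions.
    """
    N, M = Phi.shape
    alpha = np.ones(M)            # hyperparameters (inverse prior variances)
    keep = np.arange(M)           # indices of surviving basis vectors
    mu_full = np.zeros(M)
    for _ in range(n_iter):
        P = Phi[:, keep]
        A = np.diag(alpha[keep])
        # Posterior covariance and mean of the weights given current alpha
        Sigma = np.linalg.inv(P.T @ P / sigma2 + A)
        mu = Sigma @ P.T @ t / sigma2
        # Evidence-based re-estimation of each hyperparameter
        gamma = 1.0 - alpha[keep] * np.diag(Sigma)
        alpha[keep] = gamma / (mu ** 2 + 1e-12)
        mu_full[:] = 0.0
        mu_full[keep] = mu
        # Prune basis functions whose weights are driven to zero
        keep = keep[alpha[keep] < alpha_max]
    return mu_full
```

In a typical run on a noiseless signal generated from a few dictionary columns, most hyperparameters diverge and the corresponding weights are pruned, leaving a parsimonious representation that still reconstructs the signal.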
