Abstract

Sparse Bayesian learning is a powerful framework for expressing compressed sensing problems in the language of Bayesian inference, but its principal algorithm, the variational relevance vector machine, is matrix-bound and scales poorly to larger problem sizes. Recent attempts to construct more computationally efficient variational inference methods within the sparse Bayesian learning framework either scale poorly with problem size or fail to minimize the correct variational objective. This work demonstrates an efficient matrix-free algorithm for sparse Bayesian learning that scales well to large problems, converges at a rate competitive with existing fast methods, and provably descends on the correct mean-field objective.
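To make the abstract's central term concrete, the sketch below illustrates what "matrix-free" typically means in this setting: computing the sparse Bayesian posterior mean via conjugate gradient, using only matrix-vector products with the dictionary, so the dense posterior precision matrix is never formed or inverted. This is a generic illustration under assumed notation (dictionary Phi, precisions alpha and noise precision beta, and the helper posterior_mean_matrix_free are all hypothetical names), not the paper's specific algorithm.

```python
# Minimal sketch of matrix-free posterior inference in sparse Bayesian
# learning (illustrative only; not the paper's method). The posterior mean
# solves (beta * Phi^T Phi + diag(alpha)) mu = beta * Phi^T y, which
# conjugate gradient handles with matvecs alone.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def posterior_mean_matrix_free(Phi, y, alpha, beta):
    """Compute the SBL posterior mean by CG without materializing the precision."""
    n = Phi.shape[1]

    def precision_matvec(v):
        # Apply beta * Phi^T (Phi v) + alpha * v; cost is two matvecs with Phi.
        return beta * (Phi.T @ (Phi @ v)) + alpha * v

    A = LinearOperator((n, n), matvec=precision_matvec)
    rhs = beta * (Phi.T @ y)
    mu, info = cg(A, rhs, atol=1e-8)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return mu

# Toy usage: random dictionary, sparse ground-truth weights.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 400))
w_true = np.zeros(400)
w_true[[3, 50, 200]] = [1.0, -2.0, 0.5]
y = Phi @ w_true + 0.01 * rng.standard_normal(100)
mu = posterior_mean_matrix_free(Phi, y, alpha=np.ones(400), beta=1e4)
```

The design point is that the n-by-n precision matrix never exists in memory; only products with Phi and Phi^T are needed, which is what lets such methods scale to large problems.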
