Abstract
Principal component analysis (PCA) approximates a data matrix with a low-rank one by imposing sparsity on its singular values. Its robust variant can cope with spiky noise by introducing an element-wise sparse term. In this paper, we extend such sparse matrix learning methods and propose a novel framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity through a Bayesian regularization effect called model-induced regularization. Although group LASSO also allows us to design arbitrary types of sparsity on a matrix, SAMF, being based on the Bayesian framework, provides inference without requiring manual parameter tuning. We propose an efficient iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the global optimal solution for a large subset of parameters in each step. We demonstrate the usefulness of our method on benchmark datasets and a foreground/background video separation problem.
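As an informal sketch of the decompositions described above (the symbols $Y$, $L$, $S$, $\Theta^{(k)}$, and $E$ are illustrative notation, not necessarily that used in the paper): robust PCA models the observation as a low-rank term plus an element-wise sparse term, and SAMF generalizes this to an additive combination of terms, each with its own designed sparsity structure:
\[
Y \;\approx\; L + S
\qquad\text{(robust PCA: low-rank $L$, element-wise sparse $S$)},
\]
\[
Y \;\approx\; \sum_{k=1}^{K} \Theta^{(k)} + E
\qquad\text{(SAMF: $K$ additive terms, each with its own sparsity pattern, plus noise $E$)}.
\]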