Abstract

The probabilistic classification vector machine (PCVM) is a sparse learning approach that addresses the stability problems of the relevance vector machine (RVM) in classification. Because PCVM is trained with the expectation-maximization (EM) algorithm, it is sensitive to initialization, can converge to local minima, and yields only point estimates rather than a full Bayesian posterior. Another disadvantage is that PCVM does not scale efficiently to large data sets. To address these problems, this paper proposes an efficient PCVM (EPCVM) that sequentially adds or deletes basis functions according to marginal likelihood maximization, enabling efficient training. Because EPCVM employs a truncated prior, two approximation techniques, i.e., Laplace approximation and expectation propagation (EP), have been used to implement EPCVM and obtain full Bayesian solutions. We have verified both the Laplace approximation and EP against a hybrid Monte Carlo approach. The generalization performance and computational efficiency of EPCVM are extensively evaluated. Theoretical discussions using Rademacher complexity reveal the relationship between the sparsity of EPCVM and its generalization bound.
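To make the sequential training scheme concrete, the sketch below illustrates the general idea of adding, re-estimating, and deleting basis functions by marginal likelihood maximization. It follows the fast sparse-Bayesian-learning updates of Tipping and Faul in their regression form for clarity; the actual EPCVM algorithm instead handles classification with a truncated Gaussian prior via Laplace or EP approximations, so every name and parameter here (sequential_sbl, beta, n_sweeps) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def sequential_sbl(Phi, t, beta=100.0, n_sweeps=20):
    """Sketch of sequential sparse Bayesian learning (Tipping & Faul, 2003):
    basis functions are added, re-estimated, or deleted one at a time so
    that each step does not decrease the marginal likelihood. Regression
    form shown for simplicity; EPCVM applies the same scheme to
    classification with a truncated prior."""
    N, M = Phi.shape
    phi_sq = np.sum(Phi**2, axis=0)            # ||phi_m||^2 for each basis
    alpha = np.full(M, np.inf)                 # inf => basis m excluded

    # Start from the single basis function best aligned with the targets.
    m0 = int(np.argmax((Phi.T @ t)**2 / phi_sq))
    denom = (Phi[:, m0] @ t)**2 / phi_sq[m0] - 1.0 / beta
    alpha[m0] = phi_sq[m0] / max(denom, 1e-12)

    for _ in range(n_sweeps):
        for m in range(M):                     # cycle over all candidates
            active = np.flatnonzero(np.isfinite(alpha))
            Phi_a = Phi[:, active]
            # Posterior covariance of the currently active weights.
            # (Production code would update this incrementally instead
            # of re-inverting on every step.)
            Sigma = np.linalg.inv(np.diag(alpha[active])
                                  + beta * Phi_a.T @ Phi_a)
            # Sparsity factor S and quality factor Q of candidate m.
            PtPa = Phi[:, m] @ Phi_a
            S = beta * phi_sq[m] - beta**2 * PtPa @ Sigma @ PtPa
            Q = beta * (Phi[:, m] @ t) - beta**2 * PtPa @ Sigma @ (Phi_a.T @ t)
            if np.isfinite(alpha[m]):          # m currently in the model
                if abs(alpha[m] - S) < 1e-12:  # numerical guard
                    continue
                s = alpha[m] * S / (alpha[m] - S)
                q = alpha[m] * Q / (alpha[m] - S)
            else:
                s, q = S, Q
            if q**2 > s:                       # contribution is significant:
                alpha[m] = s**2 / (q**2 - s)   #   add or re-estimate alpha_m
            elif np.isfinite(alpha[m]) and active.size > 1:
                alpha[m] = np.inf              # prune m from the model

    # Posterior mean weights of the final sparse model.
    active = np.flatnonzero(np.isfinite(alpha))
    Phi_a = Phi[:, active]
    Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Phi_a.T @ Phi_a)
    w = np.zeros(M)
    w[active] = beta * Sigma @ Phi_a.T @ t
    return w

if __name__ == "__main__":
    # Toy check: noisy sinc regression with an RBF design matrix.
    rng = np.random.default_rng(0)
    X = rng.uniform(-10, 10, 100)
    t = np.sinc(X / np.pi) + 0.05 * rng.standard_normal(100)
    Phi = np.exp(-(X[:, None] - X[None, :])**2 / (2 * 2.0**2))
    w = sequential_sbl(Phi, t, beta=1.0 / 0.05**2)
    print("basis functions kept:", np.count_nonzero(w), "of", len(w))
```

Most alpha values diverge to infinity during the sweeps, which is what produces the sparse set of retained basis functions; this greedy add/re-estimate/delete loop is what makes the sequential scheme far cheaper than batch EM training over all basis functions at once.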
