Abstract

Although the Relevance Vector Machine (RVM) is one of the most popular algorithms in machine learning and computer vision, outliers in the training data make its estimates unreliable. In this paper, a robust RVM model under a non-parametric Bayesian framework is proposed. We decompose the noise term in the RVM model into two components: a Gaussian noise term and a spiky noise term. The observed data are therefore modeled as y = Dw + s + e, where Dw is the relevance vector component (D is the kernel function matrix and w is the weight vector), s is the spiky term, and e is the Gaussian noise term. A spike-and-slab sparse prior is imposed on the weight vector w, which gives a more intuitive constraint on sparsity than the Student's t-distribution used in the traditional RVM. A spike-and-slab sparse prior is also placed on the spiky component s, so that outliers in the training data can be recognized effectively. Several experiments demonstrate better performance than standard RVM regression.
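As a rough illustration of the decomposition y = Dw + s + e described above, the following minimal sketch generates synthetic data with a sparse weight vector, sparse spiky outliers, and dense Gaussian noise. The RBF kernel choice, sparsity levels, and noise scales here are illustrative assumptions, not settings taken from the paper.

```python
# Minimal sketch (not the authors' code) of the generative decomposition
# y = D w + s + e, assuming an RBF kernel matrix D, a sparse weight vector w,
# sparse spiky outliers s, and dense Gaussian noise e.
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Training inputs and RBF kernel (design) matrix D; the kernel width is an assumption.
x = np.linspace(-10, 10, n)
D = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 2.0 ** 2))

# Sparse weight vector w: most entries are exactly zero (the "spike"),
# a few are drawn from a broad Gaussian (the "slab").
w = np.zeros(n)
active = rng.choice(n, size=5, replace=False)
w[active] = rng.normal(0.0, 2.0, size=5)

# Sparse spiky noise s: a small fraction of observations carry large outliers.
s = np.zeros(n)
outliers = rng.choice(n, size=8, replace=False)
s[outliers] = rng.normal(0.0, 10.0, size=8)

# Dense Gaussian noise e.
e = rng.normal(0.0, 0.1, size=n)

# Observed targets according to the decomposition y = D w + s + e.
y = D @ w + s + e
```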
