Abstract

To overcome several inherent shortcomings of the support vector machine (SVM), such as the requirement that the kernel function satisfy the Mercer condition, the relevance vector machine (RVM) was proposed. This study compares the performance of RVM and SVM on regression and classification problems. Because the RVM is built on a Bayesian framework in which prior knowledge is introduced through the penalty term, it needs fewer relevance vectors (RVs) than the SVM needs support vectors (SVs), yet achieves better generalization. In this paper, sparse Bayesian learning (SBL) is introduced first, followed by the RVM regression and classification models based on it. By inferring the model parameters, RVM learning is transformed into maximization of the marginal likelihood, and three commonly used estimation methods for this maximization are presented. Finally, simulation experiments show that the RVM uses fewer RVs than the SVM uses SVs while generalizing better, in both the regression and the classification case, and that the choice of kernel function affects RVM performance; however, no single kernel function consistently outperforms the others.
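As a rough illustration of the comparison described above, the sketch below fits a sparse Bayesian (RVM-style) regression model by type-II maximum-likelihood re-estimation of the weight precisions and compares its number of relevance vectors with the number of support vectors kept by an SVR on the standard noisy sinc benchmark. The RBF kernel width, the SVR settings, the iteration count, and the pruning threshold are illustrative assumptions, not values taken from the paper.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.linspace(-10, 10, 100)[:, None]
y = np.sinc(X.ravel() / np.pi) + 0.1 * rng.standard_normal(100)  # noisy sinc(x) = sin(x)/x

def rbf_kernel(A, B, gamma=0.5):
    # Squared Euclidean distances between all pairs of rows, then Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Design matrix: a bias column plus one RBF basis function per training point.
Phi = np.hstack([np.ones((len(X), 1)), rbf_kernel(X, X)])
N = len(X)

alpha = np.ones(Phi.shape[1])  # per-weight prior precisions
beta = 1.0 / np.var(y)         # noise precision

for _ in range(300):
    # Posterior over the weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y
    # Fixed-point (MacKay-style) re-estimation of alpha and beta.
    g = 1.0 - alpha * np.diag(Sigma)
    alpha = np.maximum(g, 1e-12) / (mu ** 2)
    beta = (N - g.sum()) / np.sum((y - Phi @ mu) ** 2)
    # Prune basis functions whose precision diverges (their weight is driven to zero).
    keep = alpha < 1e9
    alpha, Phi = alpha[keep], Phi[:, keep]

svr = SVR(kernel="rbf", gamma=0.5, C=10.0, epsilon=0.05).fit(X, y)
print(f"RVM relevance vectors: {alpha.size} of {N}")
print(f"SVM support vectors:   {svr.support_.size} of {N}")

On this kind of benchmark the sparse Bayesian model typically retains far fewer basis functions than the SVR keeps support vectors, which is the sparsity effect the experiments summarized in the abstract quantify.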
