Abstract

This paper presents a new multi-class least-squares support vector machine (LS-SVM) whose solution is sparse in the weight coefficients of the support vectors. The solution of a binary LS-SVM is constructed from most of the training samples, which is referred to as the non-sparseness problem. Multi-class LS-SVMs, which are built from binary classifiers, inevitably inherit this problem, slowing down classification of test examples. The paper addresses the issue by presenting a variant of the binary LS-SVM in which the sparseness of the solution is greatly improved, and a new sparse multi-class LS-SVM is then developed from the binary case. Training is implemented using an adapted two-stage regression algorithm. Experiments on synthetic data show that the novel multi-class LS-SVM reduces the number of weight parameters spanning the resulting optimal hyperplane, while maintaining generalization capacity competitive with conventional LS-SVM classifiers.
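To make the non-sparseness problem concrete, the sketch below shows a standard binary LS-SVM (not the paper's sparse variant), in one common formulation that treats classification as regression on the ±1 labels: the dual weights are obtained by solving a single linear system, so nearly every training sample receives a nonzero weight and must be kept for prediction. The RBF kernel, the regularization parameter `gamma`, and the synthetic data are assumptions for illustration only.

```python
# Minimal sketch of standard binary LS-SVM training, illustrating non-sparseness.
# Not the paper's method: kernel, gamma, and data are illustrative assumptions.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM dual linear system (regression on the +/-1 labels):
    #   [ 0      1^T        ] [ b     ]   [ 0 ]
    #   [ 1   K + I / gamma ] [ alpha ] = [ y ]
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
    # Decision function: sign( sum_i alpha_i * k(x, x_i) + b ).
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ alpha + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    alpha, b = lssvm_train(X, y)
    # Unlike a standard SVM, nearly all alpha_i are nonzero, so every training
    # sample acts as a "support vector" and is needed at test time.
    print("nonzero weights:", np.count_nonzero(np.abs(alpha) > 1e-6), "of", len(alpha))
```

Because the least-squares loss penalizes every residual, the dual weights are generally all nonzero; this is the computational burden at test time that the proposed sparse variant is designed to reduce.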
