Abstract

The least-squares support vector machine (LS-SVM) is generally parameterized by a large number of support vectors, which slows down classification. This paper proposes to search for and prune two types of support vectors. The first type is the potential outlier, a sample that is misclassified by the model trained on the remaining samples. The second type is the sample whose removal perturbs the dual objective function the least. Without explicitly rerunning the training procedure, the LS-SVM model obtained by omitting a training sample is derived analytically from the LS-SVM trained on the whole training set. This derivation reduces the computational cost of pruning a sample and constitutes the major technical contribution of this paper. Experimental results on six UCI datasets show that, compared with classical pruning methods, the proposed algorithm enhances the sparsity of the LS-SVM significantly while maintaining satisfactory generalization performance.
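The abstract's key device is obtaining the leave-one-out (LOO) model analytically rather than by retraining. As a minimal sketch, not the paper's exact algorithm, the snippet below trains an LS-SVM by solving the dual KKT system and then uses the known closed-form identity y_i − f^{(−i)}(x_i) = α_i / (H^{−1})_{ii} to compute all LOO residuals from the full model in one pass. The RBF kernel, the hyperparameters gamma and sigma, and the outlier test at the end are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of X and Z.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-d2 / (2.0 * sigma**2))

def train_ls_svm(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM dual KKT system
       [ 0    1^T        ] [b]   [0]
       [ 1    K + I/gamma] [a] = [y]
    and return the bias b, dual coefficients alpha, and the KKT matrix H."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    H = np.zeros((n + 1, n + 1))
    H[0, 1:] = 1.0
    H[1:, 0] = 1.0
    H[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y.astype(float)))
    sol = np.linalg.solve(H, rhs)
    return sol[0], sol[1:], H

def loo_residuals(alpha, H):
    """Closed-form leave-one-out residuals for LS-SVM:
    y_i - f^{(-i)}(x_i) = alpha_i / (H^{-1})_{ii},
    so the omitted-sample model never has to be retrained."""
    d = np.diag(np.linalg.inv(H))[1:]  # skip the bias row/column
    return alpha / d

# Illustrative usage: flag the first type of support vector
# (potential outliers that the LOO model misclassifies).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=40))
b, alpha, H = train_ls_svm(X, y, gamma=10.0, sigma=1.0)
r = loo_residuals(alpha, H)
outliers = np.where(y * (y - r) <= 0)[0]  # LOO prediction disagrees with label
print("potential outliers:", outliers)
```

The same residuals can also feed the second pruning criterion: since each candidate's omitted-sample model is available in closed form, the change in the dual objective caused by removing it can be evaluated cheaply and the least-perturbing sample pruned first.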
