Abstract

The optimal separating hyperplane of a typical Least Squares Support Vector Machine (LS-SVM) is constructed from most of the training samples, which slows LS-SVM classification on test samples. Previous methods address this issue by simplifying the decision rule after training, which risks a loss of generalization ability and imposes extra computational cost. This paper presents a novel optimal sparse LS-SVM whose decision rule is parameterized by an optimal subset of the training examples while retaining optimal generalization capability. For a large number of classification problems, the new LS-SVM requires a significantly reduced number of training samples, a property referred to as the sparseness of the solution. Training is implemented with a modified two-stage regression algorithm. Experiments on two-spiral data confirm the advantages described, and simulation results on checkerboard data further illustrate that the proposed LS-SVM effectively produces an optimal hyperplane that is sparse in the training examples.
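For context, the sketch below shows a standard LS-SVM classifier (the Suykens-Vandewalle dual formulation), which illustrates why a typical LS-SVM decision rule involves nearly all training samples: the equality constraints make almost every dual coefficient nonzero. This is only background; the paper's modified two-stage regression algorithm and its sparse solution are not reproduced here, and the function names and hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def train_lssvm(X, y, C=10.0, gamma=1.0):
    # Standard LS-SVM training: solve the (n+1) x (n+1) linear system
    #   [ 0   y^T         ] [b]     [0]
    #   [ y   Omega + I/C ] [alpha] [1]
    # where Omega_ij = y_i * y_j * K(x_i, x_j). Because every training
    # point contributes an equality constraint, almost all alpha_i come
    # out nonzero -- the non-sparseness the abstract refers to.
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    Omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def predict(X_train, y_train, alpha, b, X_test, gamma=1.0):
    # Decision rule f(x) = sign(sum_i alpha_i y_i K(x, x_i) + b):
    # test-time cost grows with the number of nonzero alpha_i, hence a
    # sparse solution directly speeds up classification.
    K = rbf_kernel(X_test, X_train, gamma)
    return np.sign(K @ (alpha * y_train) + b)
```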
