Abstract

Sequential minimal optimization (SMO) is widely used for training support vector machines (SVMs) because of its fast training. Training slows down, however, when a large margin parameter value is used. Training by Newton's method (NM) accelerates training in such a situation, but it slows down for a small margin parameter value. To solve this problem, in this paper we fuse SMO with NM and call the resulting method SMO-NM. Because slow training is caused by repetitive corrections of the same variables, we modify the working set selection when such repetitions are detected. We call the variables selected by SMO the SMO variables. If a variable selected as an SMO variable at the current step was already selected at a previous step, we consider that a loop is detected. In that case, in addition to the SMO variables, we add to the working set the unbounded variables that were previously selected as SMO variables, and we correct the variables in the working set by NM. If no loop is detected, the training procedure is the same as that of SMO. As a variant of this working set strategy, we further add violating variables to the working set. We clarify that if the classification problem is not linearly separable in the feature space, the solutions of the L1 and L2 SVMs (with the linear sum and square sum of slack variables, respectively) are unbounded as the margin parameter value approaches infinity, but that if the mapped training data are not linearly independent in the feature space, the solution of the least squares SVM is unbounded as the margin parameter approaches infinity. We also clarify the condition under which the increment of the objective function value by SMO-NM is larger than that by SMO. We evaluate SMO-NM on several benchmark data sets and confirm its effectiveness over SMO, especially for large margin parameter values.
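As an illustrative sketch only (not the authors' code), the loop-detection part of the working set selection described above might look as follows in Python; the function name select_working_set, its arguments, and the example values are hypothetical, and the Newton correction applied to the enlarged working set is omitted.

def select_working_set(i, j, history, alphas, C):
    # i, j: indices of the SMO variables chosen at the current step
    # history: indices chosen as SMO variables at previous steps
    # alphas: current dual variables; C: margin parameter
    loop_detected = i in history or j in history
    history.update((i, j))
    if not loop_detected:
        return [i, j], False          # no loop: plain SMO step on (i, j)
    # Loop detected: enlarge the working set with the previously selected
    # SMO variables that are unbounded (0 < alpha < C); the enlarged set
    # would then be corrected jointly by Newton's method.
    unbounded = [k for k in history if 0.0 < alphas[k] < C]
    return sorted(set(unbounded) | {i, j}), True

# Hypothetical example: index 2 was already selected, so a loop is detected
# and the unbounded index 4 joins the working set.
alphas = [0.0, 0.3, 0.7, 0.0, 0.6, 0.5]
history = {2, 4}
print(select_working_set(2, 5, history=history, alphas=alphas, C=1.0))
# -> ([2, 4, 5], True)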

