Abstract

The minimal complexity machine (MCM) minimizes the maximum distance between the training data and the separating hyperplane, and has been shown to generalize better than the conventional support vector machine. In this paper, we analyze the MCM and clarify the conditions under which its solution is nonunique and unbounded. To resolve the unboundedness, we propose the minimal complexity linear programming support vector machine (MLP SVM), in which minimization of the maximum distance between the training data and the separating hyperplane is added to the linear programming support vector machine (LP SVM). Computer experiments show that the solution of the MCM is unbounded under some conditions and that the MLP SVM generalizes better than the LP SVM for most of the two-class and multiclass problems tested.
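To make the construction concrete, the following is an illustrative sketch of how such a combined formulation might look, assuming a standard L1-regularized LP SVM primal and the MCM-style margin-bound constraints; the exact objective, constraints, and weighting used in the paper may differ.

```latex
% Hypothetical sketch of an MLP-SVM-style linear program (not the paper's
% exact formulation). For training pairs (x_i, y_i), y_i in {+1, -1}:
%
%   LP SVM part: L1 regularization of w plus slack penalties,
%   MCM part:    an upper bound h on y_i (w^T x_i + b).
\begin{align*}
  \min_{w,\,b,\,\xi,\,h} \quad
    & h \;+\; \lambda \lVert w \rVert_1 \;+\; C \sum_{i=1}^{M} \xi_i \\
  \text{s.t.} \quad
    & y_i \,(w^{\top} x_i + b) \;\ge\; 1 - \xi_i,
      && i = 1, \dots, M, \\
    & y_i \,(w^{\top} x_i + b) \;\le\; h,
      && i = 1, \dots, M, \\
    & \xi_i \ge 0, \qquad h \ge 1,
\end{align*}
```

Here $h$ bounds the maximum (signed) distance of the training data from the hyperplane, so minimizing $h$ alongside the LP SVM objective is what keeps the solution bounded; $\lambda$ and $C$ are assumed trade-off hyperparameters.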
