Abstract

The support vector machine (SVM) is one of the best-known machine learning models and is grounded in the structural risk minimization (SRM) principle. The SRM principle, formulated by Vapnik within the framework of statistical learning theory, can be naturally expressed as an Ivanov regularization-based SVM (I-SVM). Recent advances in learning theory show that the I-SVM allows more effective control of the learning hypothesis space and thereby better generalization. In this paper, we propose a new method for optimizing the I-SVM to find the optimal separating hyperplane. The proposed approach provides a parallel block-minimization framework for solving the dual I-SVM problem that exploits the advantages of the randomized primal–dual coordinate (RPDC) method, and each RPDC sub-optimization routine at every iteration has a simple closed-form solution. We also derive an upper bound τ∗ for the space-control parameter τ by solving a Morozov regularization-based SVM (M-SVM) problem. Experimental results confirm the improved performance of our method on general I-SVM learning problems.
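For reference, the Ivanov and Morozov formulations named above are commonly written as constrained hinge-loss problems. The LaTeX sketch below shows the standard textbook formulations, not necessarily the paper's exact statement; the training set {(x_i, y_i)}, weight vector w, bias b, and loss budget ε are generic symbols assumed here for illustration.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% I-SVM: Ivanov regularization constrains the hypothesis space directly,
% so the parameter tau controls the radius of the ball containing w.
\[
\text{(I-SVM)}\qquad
\min_{w,\,b}\ \frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\ 1 - y_i(w^{\top}x_i + b)\bigr)
\quad\text{s.t.}\quad \|w\|^{2}\le\tau .
\]
% M-SVM: Morozov regularization instead bounds the empirical loss by a
% budget epsilon and minimizes the norm of w.
\[
\text{(M-SVM)}\qquad
\min_{w,\,b}\ \|w\|^{2}
\quad\text{s.t.}\quad
\frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\ 1 - y_i(w^{\top}x_i + b)\bigr)\le\varepsilon .
\]
\end{document}

Under this standard reading, the norm of the M-SVM solution plausibly supplies the upper bound τ∗ mentioned in the abstract: enlarging the hypothesis space beyond it cannot further reduce the constrained empirical loss.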
