Abstract

In this paper, we propose a novel large margin classifier, called the maxi-min margin machine (M^4). This model learns the decision boundary both locally and globally, whereas other large margin classifiers construct separating hyperplanes either locally or globally only. For example, a state-of-the-art large margin classifier, the support vector machine (SVM), considers data only locally, while another significant model, the minimax probability machine (MPM), builds the decision hyperplane exclusively from global information. As a major contribution, we show that SVM yields the same solution as M^4 when the data satisfy certain conditions, and that MPM can be regarded as a relaxation of M^4. Moreover, under our proposed local and global view of data, another popular model, linear discriminant analysis, can easily be interpreted and extended as well. We describe the M^4 model definition, provide a geometrical interpretation, present theoretical justifications, and propose a practical sequential conic programming method to solve the optimization problem. We also show how Mercer kernels can be exploited to extend M^4 to nonlinear classification. Furthermore, we perform a series of evaluations on both synthetic data sets and real-world benchmark data sets. Comparisons with SVM and MPM demonstrate the advantages of our new model.
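For concreteness, the sketch below gives one plausible reading of the M^4 training problem as a sequence of second-order cone feasibility problems, in the spirit of the sequential conic programming method mentioned above. It is an illustrative assumption rather than the paper's exact algorithm: the cvxpy formulation, the bisection on the margin rho, the covariance regularization, and the mean-difference normalization used to rule out the trivial solution w = 0 are all choices made here for the example. The constraints ask every training point (the local view) to clear a margin measured in units of its own class's covariance (the global view).

```python
# Hypothetical sketch of M^4 training via bisection over the margin rho.
# For a fixed rho, each constraint below is second-order-cone representable,
# so feasibility can be checked with an off-the-shelf SOCP solver.
import numpy as np
import cvxpy as cp

def m4_fit(X_pos, X_neg, rho_hi=10.0, tol=1e-3):
    """Return (w, b) and the largest feasible margin rho found by bisection."""
    d = X_pos.shape[1]
    # Cholesky factors of the (regularized) class covariances: the "global" part.
    Lx = np.linalg.cholesky(np.cov(X_pos, rowvar=False) + 1e-6 * np.eye(d))
    Ly = np.linalg.cholesky(np.cov(X_neg, rowvar=False) + 1e-6 * np.eye(d))
    mu_diff = X_pos.mean(axis=0) - X_neg.mean(axis=0)

    lo, hi, best = 0.0, rho_hi, None
    while hi - lo > tol:
        rho = 0.5 * (lo + hi)
        w, b = cp.Variable(d), cp.Variable()
        cons = [
            # "Local" part: every point must clear the margin, scaled by
            # sqrt(w' Sigma w) = ||L' w|| for its class covariance Sigma = L L'.
            X_pos @ w + b >= rho * cp.norm(Lx.T @ w),
            -(X_neg @ w + b) >= rho * cp.norm(Ly.T @ w),
            # The constraints are scale-invariant in (w, b); this affine
            # normalization (an assumed convenience) excludes w = 0.
            mu_diff @ w == 1,
        ]
        prob = cp.Problem(cp.Minimize(0), cons)  # pure feasibility check
        prob.solve()
        if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
            lo, best = rho, (w.value, b.value)   # feasible: raise the margin
        else:
            hi = rho                             # infeasible: lower the margin
    return best, lo
```

Setting rho to zero recovers plain linear separation, while identical class covariances make the per-class scaling uniform, which is consistent with the claim that SVM coincides with M^4 under certain conditions on the data.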
