Abstract
The minimax probability machine (MPM) is an effective discriminant classifier based on prior knowledge: it directly estimates a probabilistic accuracy bound by minimizing the maximum probability of misclassification. However, the traditional MPM learns only one hyperplane to separate the classes in the feature space, and it may incur a heavy computational burden during training because it must solve a large-scale second-order cone programming (SOCP)-type problem. In this work, we propose a novel twin minimax probability extreme learning machine (TMPELM) for pattern classification. Instead of a single separating hyperplane, TMPELM solves a pair of smaller-sized SOCP-type problems to generate two non-parallel hyperplanes. Specifically, for each hyperplane, TMPELM maximizes the probability that the sample points of one class are correctly classified, that is, it minimizes the worst-case (maximum) probability of misclassifying that class, while keeping the hyperplane far from the other class. TMPELM first uses the random feature mapping mechanism of extreme learning machines to construct the feature space, and then learns the two non-parallel separating hyperplanes for the final classification. The proposed TMPELM exploits not only the geometric information of the samples but also their statistical information (mean and covariance). Moreover, we extend the linear TMPELM model to a nonlinear one via kernelization techniques, and we analyze the computational complexity of TMPELM. Experimental results on both near-infrared (NIR) spectroscopy datasets and benchmark datasets demonstrate the effectiveness of TMPELM.
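As context for the random feature mapping mentioned above, the following is a minimal NumPy sketch of the ELM-style mapping and of the per-class statistics (mean and covariance) that MPM-type objectives operate on. The function name, activation choice, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_feature_map(X, n_hidden=50, seed=0):
    """ELM-style random feature mapping: project inputs through a
    randomly initialized hidden layer whose weights are never trained.
    Hypothetical helper for illustration only."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    # Sigmoid activation yields the hidden-layer output matrix H
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# The statistical information used by the SOCP-type objectives (class
# mean and covariance) is then computed in this random feature space:
X_pos = np.random.randn(40, 5)        # toy samples for one class
H_pos = random_feature_map(X_pos)     # shape (40, 50)
mu_pos = H_pos.mean(axis=0)           # class mean, shape (50,)
Sigma_pos = np.cov(H_pos, rowvar=False)  # class covariance, (50, 50)
```

Each of the two non-parallel hyperplanes would be fit against such statistics, one SOCP-type problem per class, which is what keeps each subproblem smaller than the single large problem of the traditional MPM.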