The support vector machine (SVM), as a supervised learning method, has many variants with strong performance. In recent years, research has increasingly focused on nonparallel SVMs, of which the twin SVM (TWSVM) is the representative example. To reduce the influence of outliers, these methods adopt more robust distance measurements, but the discriminability of the models is neglected. In this article, we propose the robust manifold twin bounded SVM (RMTBSVM), which considers both robustness and discriminability. Specifically, a novel norm, the capped L₁-norm, is used as the distance metric for robustness, and a robust manifold regularization term is added to further improve robustness and classification performance. In addition, we use the kernel method to extend the proposed RMTBSVM to nonlinear classification. We formulate the optimization problems of the proposed model, then propose effective algorithms for both the linear and nonlinear cases and prove their convergence. Finally, experiments are conducted to verify the effectiveness of our model. Compared with other methods under the SVM framework, the proposed RMTBSVM achieves better classification accuracy and robustness.
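For reference, a minimal sketch of the capped L₁-norm as it is commonly defined in the robust learning literature; the threshold ε and the hyperplane-distance form below are assumptions for illustration, since the abstract does not give RMTBSVM's exact formulation.

% Commonly used capped L1-norm with truncation threshold \epsilon > 0
% (the exact form in RMTBSVM may differ).
\[
  \|\mathbf{x}\|_{1,\epsilon} \;=\; \sum_{i} \min\!\bigl(|x_i|,\ \epsilon\bigr)
\]
% Applied to a TWSVM-style distance to a hyperplane (w, b), an outlier's
% contribution is capped at \epsilon instead of growing without bound:
\[
  d_{\epsilon}(\mathbf{x}) \;=\; \min\!\bigl(\lvert \mathbf{w}^{\top}\mathbf{x} + b \rvert,\ \epsilon\bigr)
\]

Capping the per-sample loss in this way is what limits the leverage of outliers relative to the standard (squared or absolute) distance used in TWSVM-type models.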