Abstract

The support vector machine (SVM) is a widely used classification method. The proximal support vector machine (PSVM) is an extension of SVM that leads to a fast and simple algorithm for generating a classifier. Motivated by the low computational cost of PSVM and the sparsity of solutions induced by the $$\ell _{1}$$-norm, in this paper we first propose a PSVM with a cardinality constraint, which is then relaxed via the $$\ell _{1}$$-norm into a trade-off $$\ell _{1}-\ell _{2}$$ regularized sparse PSVM. Next, we convert this $$\ell _{1}-\ell _{2}$$ regularized sparse PSVM into an equivalent $$\ell _{1}$$ regularized least-squares (LS) problem and solve it with the specialized interior-point method proposed by Kim et al. (J Sel Top Signal Process 12:1932–4553, 2007). Finally, the $$\ell _{1}-\ell _{2}$$ regularized sparse PSVM is illustrated on a real-world dataset taken from the University of California, Irvine Machine Learning Repository (UCI Repository), and its numerical results are compared with those of existing models such as the generalized eigenvalue proximal SVM (GEPSVM), PSVM, and SVM-Light. The results show that the $$\ell _{1}-\ell _{2}$$ regularized sparse PSVM not only achieves better classification accuracy than GEPSVM, PSVM, and SVM-Light, but also yields a sparser classifier than the $$\ell _{1}$$-PSVM.
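The core computational step described above, an $$\ell _{1}$$ regularized least-squares problem of the form $$\min _{x} \tfrac{1}{2}\Vert Ax-b\Vert _{2}^{2}+\lambda \Vert x\Vert _{1}$$, can be sketched as follows. The paper uses the specialized interior-point method of Kim et al.; the sketch below instead substitutes a simple proximal-gradient (ISTA) iteration on synthetic data, so the solver choice, data, and all variable names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def l1_least_squares(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ISTA (proximal
    gradient) -- a simple stand-in for the interior-point solver of
    Kim et al. referenced in the abstract."""
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth least-squares term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny synthetic example: a sparse ground truth is recovered with
# only a few nonzero coefficients, mirroring the sparse classifier
# the l1 term is meant to produce.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[[2, 7, 13]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = l1_least_squares(A, b, lam=0.5)
print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

The soft-thresholding step is what zeroes out small coefficients exactly, which is the mechanism behind the sparser classifier reported in the paper's experiments.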
