Abstract

The Laplacian support vector machine (LapSVM) is a popular semi-supervised learning method. Unfortunately, the decision model generated by LapSVM lacks sparsity. A sparse decision model is desirable because it enables data reduction and can improve performance. To obtain a sparse LapSVM model, we propose the $$\ell _1$$ -norm Laplacian support vector machine ( $$\ell _1$$ -norm LapSVM), which replaces the $$\ell _2$$ -norm regularizer in LapSVM with the $$\ell _1$$ -norm. The $$\ell _1$$ -norm LapSVM combines two sparsity-inducing mechanisms: $$\ell _1$$ -norm regularization and the hinge loss function. We consider two settings for the $$\ell _1$$ -norm LapSVM, linear and nonlinear. In the linear $$\ell _1$$ -norm LapSVM, sparsity in the decision model means that only the features with nonzero coefficients contribute to the decision; in other words, the linear $$\ell _1$$ -norm LapSVM performs feature selection, achieving data reduction in the feature dimension. The nonlinear (kernel) $$\ell _1$$ -norm LapSVM likewise achieves data reduction, in its case through sample selection. In addition, the optimization problem of the $$\ell _1$$ -norm LapSVM is a convex quadratic program, so it has a unique, global solution. Experimental results on semi-supervised classification tasks show that our $$\ell _1$$ -norm LapSVM achieves comparable performance.
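The linear formulation summarized above can be illustrated with a toy sketch: an $$\ell _1$$ -norm penalty plus a hinge loss on the labeled samples plus a graph-Laplacian manifold term over all (labeled and unlabeled) samples. Everything below is our own illustrative assumption — the data, the Gaussian-weight Laplacian, the parameter values, and the crude subgradient solver are not the paper's method (the paper instead formulates a convex quadratic program with a unique global solution):

```python
import numpy as np

# Toy semi-supervised data: 4 labeled + 4 unlabeled points in 3 features
# (the third feature is near-noise, so the l1 penalty should shrink it).
rng = np.random.default_rng(0)
X_lab = np.array([[ 1.0,  1.0,  0.0],
                  [ 1.2,  0.8,  0.1],
                  [-1.0, -1.0,  0.0],
                  [-0.9, -1.1, -0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
X_unl = X_lab + 0.1 * rng.standard_normal(X_lab.shape)
X = np.vstack([X_lab, X_unl])          # all samples, labeled ones first

# Graph Laplacian L = D - W from a Gaussian-kernel adjacency (an assumption;
# the graph construction is a modeling choice, not fixed by the abstract).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2)
L = np.diag(W.sum(axis=1)) - W

C, lam, mu = 1.0, 0.1, 0.1             # illustrative trade-off parameters

def objective(w):
    """l1 regularizer + hinge loss (labeled) + Laplacian manifold term (all)."""
    f_all = X @ w                       # linear decision values on all samples
    hinge = np.maximum(0.0, 1.0 - y * f_all[: len(y)]).sum()
    return lam * np.abs(w).sum() + C * hinge + mu * (f_all @ L @ f_all)

# Crude subgradient descent with a diminishing step, just to show the
# objective can be driven down; the paper would solve this as a convex QP.
w = np.zeros(X.shape[1])
for t in range(500):
    f_all = X @ w
    margin = y * f_all[: len(y)]
    g = -C * (y[:, None] * X_lab)[margin < 1.0].sum(axis=0)  # hinge subgrad
    g = g + lam * np.sign(w)                                 # l1 subgrad
    g = g + mu * 2.0 * X.T @ (L @ f_all)                     # manifold grad
    w -= 0.05 / (1 + t) ** 0.5 * g

print(w)   # weights after training; the noise feature's weight stays small
```

The sketch only demonstrates the shape of the objective; a real implementation would introduce slack variables and split $$w$$ into positive and negative parts to recast the $$\ell _1$$ term, yielding the convex quadratic program the abstract refers to.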
