Abstract

The 1-norm support vector machine (SVM) has attracted substantial attention for its good sparsity. However, the computational complexity of training a 1-norm SVM is about the cube of the number of samples, which is high. This paper replaces the hinge loss (or the ε-insensitive loss) in the 1-norm SVM with the squared loss, and applies orthogonal matching pursuit (OMP) to approximate the solution of the 1-norm SVM with the squared loss. Experimental results on toy and real-world datasets show that OMP trains the 1-norm SVM faster than several existing methods while achieving similar learning performance.
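The abstract does not include the paper's derivation, but the core subroutine it names is standard. Below is a minimal sketch of generic orthogonal matching pursuit for a sparse least-squares problem, which is the building block the paper applies to the squared-loss 1-norm SVM; the function name `omp` and its interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy orthogonal matching pursuit (illustrative sketch).

    Repeatedly selects the column of A most correlated with the
    current residual, then refits the coefficients by least squares
    on the selected support. Returns a sparse coefficient vector.
    """
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit restricted to the selected columns.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    coef[support] = sol
    return coef
```

Each iteration costs one matrix-vector product plus a small least-squares solve on the current support, which is how OMP avoids the cubic cost of solving the full 1-norm problem directly.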
