Abstract

Support vector machines (SVMs) with the $\ell_1$-penalty have become a standard tool for high-dimensional classification problems with sparsity constraints, with applications in bioinformatics and signal processing, among others. We give non-asymptotic results on the performance of the $\ell_1$-SVM in the identification of sparse classifiers. We show that an $N$-dimensional $s$-sparse classification vector can, with high probability, be well approximated from only $O(s\log(N))$ Gaussian trials. We derive similar estimates in the presence of misclassifications, as well as for the so-called doubly regularized SVM, which combines the $\ell_1$- and $\ell_2$-penalties. Similar bounds were obtained earlier in the analysis of the LASSO and of 1-bit compressed sensing.
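The recovery setting described above can be illustrated numerically. The sketch below is not the paper's method or proof; it is a minimal NumPy experiment, assuming illustrative problem sizes and a simple proximal subgradient (ISTA-style) solver for the $\ell_1$-penalized hinge loss, with the constant in $m \approx O(s\log N)$ and the parameters `lam` and `step` chosen ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): ambient dimension, sparsity,
# and a number of Gaussian trials on the order of s*log(N).
N, s = 200, 5
m = int(4 * s * np.log(N))

# Ground-truth s-sparse, unit-norm classification vector
w_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
w_true[support] = rng.standard_normal(s)
w_true /= np.linalg.norm(w_true)

# Gaussian trials x_i and noiseless labels y_i = sign(<x_i, w_true>)
X = rng.standard_normal((m, N))
y = np.sign(X @ w_true)

# l1-SVM objective: (1/m) * sum_i max(0, 1 - y_i <w, x_i>) + lam * ||w||_1,
# minimized by subgradient steps on the hinge loss plus soft-thresholding.
lam, step = 0.05, 0.1
w = np.zeros(N)
for _ in range(2000):
    margins = y * (X @ w)
    active = margins < 1                    # samples violating the margin
    grad = -(X[active].T @ y[active]) / m   # hinge-loss subgradient
    w = w - step * grad
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

# The labels only determine the direction of w_true, so compare
# the normalized estimate to the true classifier.
w_hat = w / np.linalg.norm(w)
print(f"correlation with true classifier: {w_hat @ w_true:.3f}")
```

With these settings the normalized minimizer aligns closely with the true sparse direction, consistent with the $O(s\log(N))$ sample regime; a doubly regularized variant would simply add an $\ell_2$ shrinkage step to the same loop.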
