Abstract
A support vector machine (SVM) is an algorithm that finds a hyperplane which optimally separates labeled data points in $\mathbb{R}^n$ into positive and negative classes. The data points on the margin of this separating hyperplane are called \emph{support vectors}. We connect the possible configurations of support vectors to Radon's theorem, which provides guarantees for when a set of points can be divided into two classes (positive and negative) whose convex hulls intersect. If the convex hulls of the positive and negative support vectors are projected onto a separating hyperplane, then the projections intersect if and only if the hyperplane is optimal. Further, with a particular type of general position, we show that (a) the projected convex hulls of the support vectors intersect in exactly one point, (b) the support vectors are stable under perturbation, (c) there are at most $n+1$ support vectors, and (d) every number of support vectors from 2 up to $n+1$ is possible. Finally, we perform computer simulations studying the expected number of support vectors, and their configurations, for randomly generated data. We observe that as the distance between classes of points increases for this type of randomly generated data, configurations with fewer support vectors become more likely.
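The central claim above can be checked numerically. The following is a minimal sketch (not from the paper), assuming scikit-learn, NumPy, and SciPy are available: it trains a near-hard-margin linear SVM on separable data in $\mathbb{R}^2$, projects each class's support vectors onto the separating hyperplane, and tests whether the projected convex hulls intersect via a small linear-programming feasibility problem. The helpers `project` and `hulls_intersect` are illustrative names, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Two linearly separable point clouds in R^2
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(20), -np.ones(20)])

# Hard-margin SVM approximated by a very large C
clf = SVC(kernel="linear", C=1e10).fit(X, y)
w = clf.coef_[0]
b = clf.intercept_[0]
sv = clf.support_vectors_
sv_labels = y[clf.support_]

# Orthogonal projection of points onto the hyperplane {x : w.x + b = 0}
def project(points, w, b):
    return points - ((points @ w + b) / (w @ w))[:, None] * w

P_pos = project(sv[sv_labels > 0], w, b)
P_neg = project(sv[sv_labels < 0], w, b)

# conv(A) and conv(B) intersect iff there exist lambda, mu >= 0 with
# sum(lambda) = 1, sum(mu) = 1, and A^T lambda = B^T mu: an LP feasibility test.
def hulls_intersect(A, B):
    na, nb = len(A), len(B)
    dim = A.shape[1]
    A_eq = np.zeros((dim + 2, na + nb))
    A_eq[:dim, :na] = A.T
    A_eq[:dim, na:] = -B.T
    A_eq[dim, :na] = 1.0
    A_eq[dim + 1, na:] = 1.0
    b_eq = np.concatenate([np.zeros(dim), [1.0, 1.0]])
    res = linprog(c=np.zeros(na + nb), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (na + nb))
    return res.success

print("projected hulls intersect:", hulls_intersect(P_pos, P_neg))  # expect True
```

Note that the LP test only certifies that the intersection is nonempty; under the general-position hypothesis, result (a) says the intersection is in fact a single point.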