Abstract

Support vector machines (SVMs) are classic binary classification algorithms and have proven to be a robust and well-behaved classification technique in many real-world problems. However, there are ambiguities in the basic concepts of SVMs, even though these ambiguities do not affect their effectiveness. Corinna Cortes and Vladimir Vapnik, who introduced SVMs in 1995, pointed out that an SVM predicts through a hyperplane with a maximal margin; yet the existing literature contains two different definitions of that margin. Moreover, Cortes and Vapnik converted the SVM into an optimization problem that is much easier to solve, but existing papers do not explain well how that optimization problem derives from an SVM. These ambiguities may hinder understanding of the basic concepts of SVMs. To address them, this paper defines a separating hyperplane of a training data set and, on that basis, an optimal separating hyperplane of the set. The two definitions are reasonable, since this paper proves that $w_0^T x + b_0 = 0$ is an optimal separating hyperplane of a training data set whenever $w_0$ and $b_0$ constitute a solution to the above optimization problem. Notes on the margin and the optimization problem are then given based on the two definitions. These notes should help clarify the basic concepts of SVMs.
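
For orientation, here is a minimal sketch of the optimization problem the abstract refers to. The abstract does not reproduce the paper's exact notation, so the form below is the standard textbook hard-margin primal attributed to Cortes and Vapnik's 1995 formulation, stated under the assumption of a linearly separable training set $\{(x_i, y_i)\}_{i=1}^{n}$ with labels $y_i \in \{-1, +1\}$:

\[
\begin{aligned}
\min_{w,\,b} \quad & \tfrac{1}{2}\,\lVert w \rVert^{2} \\
\text{subject to} \quad & y_i \left( w^{T} x_i + b \right) \ge 1, \qquad i = 1, \dots, n.
\end{aligned}
\]

A solution $(w_0, b_0)$ gives the hyperplane $w_0^T x + b_0 = 0$ and the classifier $f(x) = \operatorname{sign}(w_0^T x + b_0)$. A plausible source of the two margin definitions noted above is a factor-of-two discrepancy common in the literature: the distance from the optimal hyperplane to its nearest training point is $1/\lVert w_0 \rVert$, while the width of the band between the supporting hyperplanes $w_0^T x + b_0 = \pm 1$ is $2/\lVert w_0 \rVert$.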
