Abstract
Support vector machines (SVMs), with their roots in Statistical Learning Theory (SLT) and optimization methods, have become powerful tools for problem solving in machine learning. SVMs reduce most machine learning problems to optimization problems, and optimization lies at the heart of SVMs. Many SVM algorithms involve solving not only convex problems, such as linear programming, quadratic programming, second-order cone programming, and semi-definite programming, but also non-convex and more general optimization problems, such as integer programming, semi-infinite programming, and bi-level programming. The purpose of this paper is to understand SVMs from the optimization point of view and to review several representative optimization models in SVMs and their applications in economics, in order to promote research interest in both optimization-based SVM theory and economics applications. This paper starts by summarizing and explaining the nature of SVMs. It then discusses optimization models for SVMs along three major themes. First, least squares SVM, twin SVM, AUC-maximizing SVM, and fuzzy SVM are discussed for standard problems. Second, support vector ordinal machine, semi-supervised SVM, Universum SVM, robust SVM, knowledge-based SVM, and multi-instance SVM are presented for nonstandard problems. Third, we explore other important issues, such as the lp-norm SVM for feature selection, LOO-SVM based on minimizing the leave-one-out (LOO) error bound, probabilistic outputs for SVMs, and rule extraction from SVMs. Finally, several applications of SVMs to financial forecasting, bankruptcy prediction, and credit risk analysis are introduced.
Highlights
Support vector machines (SVMs), which were introduced by Vapnik and his coworkers in the early 1990s (Cortes, Vapnik 1995; Vapnik 1996, 1998), have proved to be effective and promising techniques for data mining (Peng et al 2008; Yang, Wu 2006).
Unlike traditional methods (e.g. neural networks), SVMs have their roots in Statistical Learning Theory (SLT) and optimization methods; they have become powerful tools for solving machine learning problems with finite training points and overcome traditional difficulties such as the "curse of dimensionality" and over-fitting.
SVMs reduce most machine learning problems to optimization problems; optimization lies at the heart of SVMs, and convex optimization in particular plays a central role.
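To make the optimization view concrete, a representative instance is the standard soft-margin SVM, whose primal form is a convex quadratic program (a sketch of the classical formulation for training points $(x_i, y_i)$ with labels $y_i \in \{-1, +1\}$; $C > 0$ is the penalty parameter and $\xi_i$ are slack variables):

```latex
\min_{w,\,b,\,\xi}\ \ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,n.
```

The objective is convex and the constraints are linear, so any local minimizer is global; many of the SVM variants surveyed here arise by modifying this objective or constraint set.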
Summary
Support vector machines (SVMs), which were introduced by Vapnik and his coworkers in the early 1990s (Cortes, Vapnik 1995; Vapnik 1996, 1998), have proved to be effective and promising techniques for data mining (Peng et al 2008; Yang, Wu 2006). SVMs' theoretical foundation and implementation techniques have been established, and SVMs have developed rapidly and gained popularity due to a number of attractive features: clean mathematical representations, geometrical interpretations, good generalization ability, and promising empirical performance (Cristianini, Shawe-Taylor 2000; Deng, Tian 2004, 2009; Deng et al 2012; Herbrich 2002; Schölkopf, Smola 2002).