Abstract

Neural networks such as multilayer perceptrons and radial basis function networks have been very successful in a wide range of problems. In this paper we give a short introduction to some new developments related to support vector machines (SVMs), a new class of kernel-based techniques introduced within statistical learning theory and structural risk minimization. This new approach leads to solving convex optimization problems, and the model complexity follows from this solution. We especially focus on the least squares support vector machine (LS-SVM) formulation, which makes it possible to solve highly nonlinear and noisy black-box modelling problems, even in very high dimensional input spaces. While standard SVMs have mainly been applied to static problems such as classification and function estimation, LS-SVM models have been extended to recurrent models and applied to optimal control problems. Moreover, using weighted least squares and special pruning techniques, LS-SVMs can be employed for robust nonlinear estimation and sparse approximation. Applications of (LS-)SVMs to a large variety of artificial and real-life data sets indicate the huge potential of these methods.
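To make the LS-SVM idea concrete: in the standard function-estimation formulation (Suykens and Vandewalle), training reduces to solving a single linear system in the dual variables rather than a quadratic program. The following is a minimal sketch of that system for kernel regression, not code from the paper; the function names, the RBF kernel choice, and the hyperparameters `gamma` (regularization) and `sigma` (kernel width) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Pairwise RBF kernel K(x, z) = exp(-||x - z||^2 / sigma^2)
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / sigma**2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM dual linear system:
    # [ 0   1^T              ] [ b     ]   [ 0 ]
    # [ 1   Omega + I/gamma  ] [ alpha ] = [ y ]
    # where Omega_ij = K(x_i, x_j).
    n = X.shape[0]
    Omega = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    # Model output: f(x) = sum_i alpha_i K(x, x_i) + b
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Hypothetical usage on a noisy 1-D toy problem:
X = np.random.uniform(-3, 3, size=(100, 1))
y = np.sinc(X[:, 0]) + 0.1 * np.random.randn(100)
alpha, b = lssvm_fit(X, y, gamma=10.0, sigma=1.0)
y_hat = lssvm_predict(X, alpha, b, X, sigma=1.0)
```

Because every training point receives a nonzero `alpha`, the plain LS-SVM solution is not sparse; the weighted least squares and pruning techniques mentioned in the abstract address exactly this, e.g. by reweighting the error terms (robustness) or discarding small `|alpha_i|` and retraining (sparseness).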
