Abstract

• Large-scale nonlinear Support Vector Machines can be trained using the Alternating Direction Method of Multipliers and Hierarchically Semi-Separable kernel approximations.
• Reduced computational times can be obtained without worsening the classification accuracy.
• Grid search for the misclassification parameter selection is extremely efficient in this framework.
• Our efficient C++ implementation shows clear advantages over LIBSVM.

Typically, nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones, but their computational complexity is prohibitive for large-scale datasets: this drawback is essentially due to the need to store and manipulate large, dense, unstructured kernel matrices. Although the core of SVM training is a simple convex optimization problem, the presence of kernel matrices causes a dramatic performance reduction, making SVMs unworkably slow for large problems. Aiming at an efficient solution of large-scale nonlinear SVM problems, we propose the use of the Alternating Direction Method of Multipliers coupled with Hierarchically Semi-Separable (HSS) kernel approximations. As shown in this work, a detailed analysis of the interaction among their algorithmic components unveils a particularly efficient framework; indeed, the presented experimental results demonstrate, in the case of Radial Basis Function kernels, a significant speed-up over state-of-the-art nonlinear SVM libraries, without significantly affecting the classification accuracy.
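To make the kernel-matrix bottleneck concrete, the sketch below states the standard soft-margin SVM dual with an RBF kernel and a generic ADMM splitting of that dual. This is only an illustrative formulation under standard assumptions: the paper's exact splitting, penalty parameter, and solver details may differ, so the iterations shown should not be read as the authors' precise algorithm.

% Soft-margin SVM dual with an RBF kernel (standard formulation).
% The n-by-n matrix Q is dense and unstructured, which is the storage
% and compute bottleneck that the HSS approximation targets.
\begin{align}
  \min_{\alpha \in \mathbb{R}^n} \;& \tfrac{1}{2}\,\alpha^\top Q \alpha - \mathbf{1}^\top \alpha,
  \qquad Q_{ij} = y_i y_j\, K(x_i, x_j), \quad
  K(x_i, x_j) = e^{-\gamma \lVert x_i - x_j \rVert^2}, \\
  \text{s.t.}\;& y^\top \alpha = 0, \qquad 0 \le \alpha_i \le C, \quad i = 1, \dots, n.
\end{align}
% A generic ADMM splitting (illustrative only): introduce a copy z = \alpha
% and separate the quadratic objective from the feasible set
% \mathcal{F} = \{ z : y^\top z = 0,\; 0 \le z \le C \}, giving the iterations
\begin{align}
  \alpha^{k+1} &= (Q + \rho I)^{-1}\bigl(\mathbf{1} + \rho\,(z^{k} - u^{k})\bigr), \\
  z^{k+1} &= \Pi_{\mathcal{F}}\bigl(\alpha^{k+1} + u^{k}\bigr), \\
  u^{k+1} &= u^{k} + \alpha^{k+1} - z^{k+1}.
\end{align}
% The \alpha-update is a linear solve with Q + \rho I (and the z-update a cheap
% projection onto a box intersected with a hyperplane). Replacing K by an HSS
% approximation turns the O(n^2) dense solves and matrix-vector products into
% fast, memory-light structured operations, which is where the speed-up arises.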
