Abstract

The Support Vector Machine (SVM) classifier is a margin-based supervised machine learning method used for categorization and classification tasks. A linear SVM classifier uses a linear kernel, while a non-linear SVM classifier adopts a non-linear kernel. The linear SVM classifier is an efficient technique, especially for large, high-dimensional datasets in applications such as document categorization, time-series classification, and outlier detection. Training a linear SVM classifier is much faster than training a non-linear one. For large-scale datasets with varied shapes, configurations, and distributions, the computational cost of training a non-linear SVM classifier grows rapidly with dataset size. Existing methods have introduced various problem formulations, solvers, and strategies to speed up the training of non-linear SVM classifiers. However, solving the underlying quadratic programming (QP) problem remains challenging, especially for big data, where traditional methods struggle to train a non-linear classifier efficiently. In this paper, we propose a novel boosting algorithm that enhances the performance of weak non-linear SVM classifiers using the notions of incremental learning and decremental unlearning. Experimental results on artificial and real datasets of different sizes, shapes, and distributions show that the proposed ensemble boosting algorithm outperforms the individual SVM classifiers in terms of classification accuracy and speedup.
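The abstract does not specify the paper's algorithm in detail, so the sketch below only illustrates the general idea it builds on: boosting weak SVM classifiers into a stronger ensemble. This is a minimal, self-contained AdaBoost loop over deliberately under-trained linear SVMs (a few epochs of Pegasos-style subgradient descent on a weighted hinge loss). All function names, hyperparameters, and the bias-update heuristic here are illustrative assumptions, not the authors' method, and the incremental/decremental machinery from the paper is not reproduced.

```python
import numpy as np

def train_weak_linear_svm(X, y, sample_weight, epochs=5, lam=0.01, seed=0):
    """Weak linear SVM: a few epochs of Pegasos-style subgradient descent
    on a weighted hinge loss.  Deliberately under-trained so it behaves
    as a weak learner inside the boosting loop.  (Illustrative sketch;
    the bias update is a simple heuristic, not part of standard Pegasos.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    p = sample_weight / sample_weight.sum()   # sample points by boosting weight
    for _ in range(epochs):
        for i in rng.choice(n, size=n, p=p):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)              # shrink (regularization step)
            if margin < 1:                    # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def adaboost_svm(X, y, rounds=10):
    """AdaBoost over weak linear SVMs; labels y must be in {-1, +1}."""
    n = len(y)
    D = np.full(n, 1.0 / n)                   # per-sample boosting weights
    ensemble = []
    for r in range(rounds):
        w, b = train_weak_linear_svm(X, y, D, seed=r)
        pred = np.sign(X @ w + b)
        err = np.clip(D[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # learner's ensemble weight
        ensemble.append((alpha, w, b))
        D *= np.exp(-alpha * y * pred)         # up-weight misclassified points
        D /= D.sum()
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the weak SVMs."""
    score = sum(a * np.sign(X @ w + b) for a, w, b in ensemble)
    return np.sign(score)
```

On a toy two-blob dataset, `adaboost_svm(X, y)` followed by `predict(ensemble, X)` typically recovers a strong decision boundary from these weak learners; the boosting loop itself is the standard discrete AdaBoost recipe, independent of which weak learner is plugged in.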
