Abstract

The availability of new data for a previously trained Machine Learning (ML) model usually requires retraining and adjustment of the model. Support Vector Machines (SVMs) are widely used in ML because of their strong mathematical foundations and flexibility. However, SVM training is computationally expensive in both time and memory, so the training phase can become a limitation in problems where the model is updated regularly. As a solution, new methods for training and updating SVMs have been proposed in the past. In this paper, we introduce the concept of a Support Subset and a new retraining methodology for SVMs. A Support Subset is a subset of the training set such that retraining an ML model on this subset together with the new data is equivalent to training on all the data. The performance of the proposal is evaluated in a variety of experiments on simulated and real datasets in terms of time, quality of the solution, resulting support vectors, and amount of data employed. The promising results open a new research line for improving the effectiveness and adaptability of the proposed technique, including its generalization to other ML models.
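To make the Support Subset idea concrete, the following is a minimal sketch of subset-based SVM retraining. The abstract does not specify how the Support Subset is constructed, so this sketch assumes, for illustration only, that it is approximated by the support vectors of the current model; the dataset, kernel, and parameter choices are likewise illustrative, not the paper's methodology.

```python
# Hypothetical sketch: retrain an SVM on a candidate support subset plus new data,
# instead of on the full accumulated dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Initial training data and a later batch of new data.
X, y = make_classification(n_samples=1200, n_features=10, random_state=0)
X_old, y_old = X[:1000], y[:1000]
X_new, y_new = X[1000:], y[1000:]

# Train on the initial data.
model = SVC(kernel="rbf", C=1.0).fit(X_old, y_old)

# Candidate support subset (assumption): the points that became support vectors.
idx = model.support_
X_sub, y_sub = X_old[idx], y_old[idx]

# Retrain on the (much smaller) support subset together with the new data.
X_retrain = np.vstack([X_sub, X_new])
y_retrain = np.concatenate([y_sub, y_new])
updated = SVC(kernel="rbf", C=1.0).fit(X_retrain, y_retrain)

# Reference: full retraining on all accumulated data, for comparison.
full = SVC(kernel="rbf", C=1.0).fit(X, y)
print("subset-retrained accuracy:", updated.score(X, y))
print("fully retrained accuracy: ", full.score(X, y))
```

In this setup the retraining cost is driven by the size of the support subset plus the new batch rather than by the whole accumulated dataset, which is the trade-off (time, solution quality, resulting support vectors, data used) that the paper's experiments evaluate.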
