Abstract

The main task of incremental learning is to handle newly added training samples while effectively reusing the result of previous training, so as to obtain a better classification result quickly. To reuse the previous training result and retain the useful information in the training set, the relationship between the Karush-Kuhn-Tucker (KKT) conditions and the influence of the newly added samples on the previous support vector set is analyzed, and the constitution of the new training sample set for incremental learning is given. By choosing only the most important samples for incremental learning, the computational cost of SVM incremental training is reduced, and a fast SVM incremental learning algorithm is proposed in this paper. Experimental results show that the proposed algorithm achieves better classification performance.
