Abstract
Incremental learning has recently attracted increasing attention, in both theory and application. In this paper, incremental learning algorithms for the Lagrangian support vector machine (LSVM) are proposed. LSVM is an improvement on the standard linear SVM for classification that reduces training to the minimization of an unconstrained differentiable convex program. The solution of this program is obtained by an iterative scheme with simple linear convergence. At the start of the algorithm, the matrix inversion required by the solver is reduced, via the Sherman–Morrison–Woodbury identity, to the inversion of a matrix whose order is the dimensionality of the original input space plus one, which cuts computation time. The incremental learning algorithms for LSVM presented in this paper cover two cases: online and batch incremental learning. Because the matrix inverse after an increment is computed from previously stored information, the computation need not be repeated from scratch. Experimental results show that the algorithms outperform existing methods.
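As a concrete reference point, the following is a minimal sketch of the LSVM iteration summarized above (after Mangasarian and Musicant's LSVM), together with the kind of Sherman–Morrison–Woodbury (SMW) rank-k inverse update that an incremental step can reuse. The function names, the parameter nu, the step size alpha, and the stopping tolerance are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lsvm_train(A, d, nu=1.0, itmax=1000, tol=1e-5):
    """Minimal LSVM solver sketch (assumed, after Mangasarian & Musicant).

    A: (m, n) data matrix; d: (m,) labels in {-1, +1}.
    Returns (w, gamma) for the separating plane x'w = gamma.
    """
    m, n = A.shape
    alpha = 1.9 / nu                      # step size; linear convergence needs 0 < alpha < 2/nu
    H = d[:, None] * np.hstack([A, -np.ones((m, 1))])  # H = D[A, -e], shape (m, n+1)
    e = np.ones(m)
    # SMW identity: inv(I/nu + H H') = nu * (I - H inv(I/nu + H'H) H'),
    # so only an (n+1) x (n+1) matrix E is ever inverted.
    E_inv = np.linalg.inv(np.eye(n + 1) / nu + H.T @ H)
    Qinv = lambda z: nu * (z - H @ (E_inv @ (H.T @ z)))  # apply Q^{-1} without forming the m x m Q
    u = Qinv(e)                           # starting point u0 = Q^{-1} e
    for _ in range(itmax):
        Qu = u / nu + H @ (H.T @ u)       # Q u, again without materializing Q
        # LSVM iteration: u_{i+1} = Q^{-1} (e + ((Q u_i - e) - alpha u_i)_+)
        u_new = Qinv(e + np.maximum(Qu - e - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    wg = H.T @ u                          # [w; gamma], since w = A'Du and gamma = -e'Du
    return wg[:n], wg[n]

def smw_increment(E_inv, H_new):
    """Rank-k SMW update of inv(E) when k new rows H_new (k, n+1) arrive,
    reusing the previously computed inverse instead of recomputing it --
    the kind of step the paper's online/batch incremental schemes rely on
    (sketch under stated assumptions, not the paper's exact update)."""
    k = H_new.shape[0]
    T = np.linalg.inv(np.eye(k) + H_new @ E_inv @ H_new.T)  # small k x k inverse
    return E_inv - E_inv @ H_new.T @ T @ H_new @ E_inv
```

With these pieces, an online step would pass a single new row to smw_increment (k = 1, so only a scalar is inverted) and rerun the iteration warm-started from the previous u; a batch step passes all k new rows at once. Classification of a point x is sign(x'w - gamma).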