The support vector machine (SVM) is widely used for classification across diverse fields thanks to its strong performance. However, it incurs high computational cost on large-scale problems. To address this issue, we develop a novel trimmed concave loss SVM model, called Ltri-SVM, which achieves sparsity and robustness simultaneously. We establish a new optimality theory for the nonconvex and nonsmooth Ltri-SVM via a newly constructed proximal stationary point. Building on this theory, we design a fast alternating direction method of multipliers (ADMM) with a working-set strategy and low computational complexity for solving Ltri-SVM. Guided by the optimality theory, the proposed algorithm divides the training dataset into two distinct categories: the non-working set and the working set. In every iteration, the algorithm updates only the parameters associated with the working set, while the parameters belonging to the non-working set remain unchanged. Updating parameters over this smaller subset decreases computational complexity and shortens runtime. Furthermore, we prove that our algorithm is globally convergent. Numerical experiments confirm the excellent performance of our algorithm in terms of classification accuracy, number of support vectors, and computational speed, surpassing nine other leading solvers. For instance, on a real dataset with over 10^7 samples, our algorithm completes the task in just 18.92 s, whereas the other solvers take at least 650.4 s.
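To make the working-set idea concrete, the following is a minimal, self-contained sketch and not the paper's actual method: it uses a plain truncated (capped) hinge loss in place of the Ltri loss, and a simple subgradient step stands in for the paper's ADMM update. The cap c, step size eta, regularization lam, and the working-set rule (samples whose loss is active but below the cap) are illustrative assumptions only.

```python
import numpy as np

def truncated_hinge(margins, c=1.0):
    """Hinge loss max(0, 1 - m), capped at c so outliers stop contributing."""
    return np.minimum(np.maximum(0.0, 1.0 - margins), c)

def working_set(X, y, w, c=1.0):
    """Illustrative rule: keep samples whose hinge loss is active (> 0) but
    below the cap c; capped samples are treated as outliers and stay in the
    non-working set, so their parameters are never touched."""
    loss = np.maximum(0.0, 1.0 - y * (X @ w))
    return np.flatnonzero((loss > 0.0) & (loss < c))

def one_iteration(X, y, w, lam=1e-2, eta=0.1, c=1.0):
    """One update restricted to the working set (subgradient step here,
    standing in for the paper's ADMM subproblem)."""
    W = working_set(X, y, w, c)
    grad = lam * w
    if W.size:  # only working-set samples contribute to the gradient
        grad -= (y[W, None] * X[W]).mean(axis=0)
    return w - eta * grad, W

# Toy run: linearly separable data, zero-initialized weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w = np.zeros(5)
for _ in range(100):
    w, W = one_iteration(X, y, w)
print("working-set size:", W.size, "train acc:", np.mean(np.sign(X @ w) == y))
```

Because each iteration touches only the (typically small) working set rather than all n samples, the per-iteration cost scales with the working-set size, which is the source of the speedups reported above.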