Abstract

As a classical machine learning model, the support vector machine (SVM) has attracted much attention due to its rigorous theoretical foundation and powerful discriminative performance. The doubly regularized SVM (DRSVM) is an important variant of SVM based on elastic-net regularization, which accounts for both the sparsity and the stability of the model. To cope with the explosive growth in data dimensionality and data volume, the alternating direction method of multipliers (ADMM) can be used to train the DRSVM model. ADMM is an effective iterative algorithm for solving convex optimization problems by decomposing a large problem into a series of smaller, tractable subproblems, and it is also well suited to distributed computing. However, a lack of guaranteed convergence and a slow convergence rate are two critical limitations of ADMM. In this paper, a 3-block ADMM algorithm based on the over-relaxation technique is proposed to accelerate DRSVM training, namely, the over-relaxed DRSVM (O-RDRSVM). The core idea of the over-relaxation technique is to incorporate additional information from the previous iterate into the current update, thereby improving the convergence of ADMM. We also propose a distributed version of O-RDRSVM, termed DO-RDRSVM, to support faster parallel and distributed computing. Moreover, we develop a fast O-RDRSVM algorithm (FO-RDRSVM) and a fast DO-RDRSVM algorithm (FDO-RDRSVM), which further reduce the computational cost of O-RDRSVM and DO-RDRSVM by employing the matrix inversion lemma. Convergence analyses ensure the effectiveness of our algorithms for DRSVM training. Finally, extensive experiments on public datasets demonstrate the advantages of our algorithms in terms of convergence rate and training time while maintaining accuracy and sparsity comparable to those of previous works.
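For context, a minimal sketch of the two main ingredients named above, written in standard textbook form rather than the paper's exact notation (the symbols \(\lambda_1,\lambda_2,\rho,\alpha\) and the two-block splitting below are illustrative assumptions, not the authors' 3-block formulation). The elastic-net regularized (doubly regularized) SVM combines the hinge loss with both an \(\ell_1\) and an \(\ell_2\) penalty:

\[
\min_{w,\,b}\;\; \frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\; 1 - y_i(w^\top x_i + b)\bigr)
\;+\;\lambda_1\|w\|_1\;+\;\frac{\lambda_2}{2}\|w\|_2^2 .
\]

For a generic splitting \(\min_{x,z} f(x)+g(z)\) subject to \(Ax+Bz=c\), the over-relaxation technique replaces \(Ax^{k+1}\) by a relaxed combination with the previous iterate before the \(z\)- and dual updates:

\[
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2,\\
\hat{x}^{k+1} &= \alpha\,Ax^{k+1} - (1-\alpha)\bigl(Bz^{k} - c\bigr), \qquad \alpha\in(1,2),\\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|\hat{x}^{k+1} + Bz - c + u^{k}\bigr\|_2^2,\\
u^{k+1} &= u^{k} + \hat{x}^{k+1} + Bz^{k+1} - c .
\end{aligned}
\]

Setting \(\alpha = 1\) recovers standard ADMM; choosing \(\alpha\in(1,2)\) is the over-relaxation that typically accelerates convergence, which is the effect the proposed O-RDRSVM exploits within its 3-block scheme.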
