Abstract

The Alternating Direction Method of Multipliers (ADMM) is an algorithm for solving large-scale machine learning optimization problems. To reduce communication delay in distributed environments, asynchronous distributed ADMM (AD-ADMM) was proposed. However, slow convergence and the master-slave communication structure greatly limit the performance of AD-ADMM on multi-core clusters. In this paper, we propose a new method to speed up the convergence of ADMM. The proposed method accelerates convergence by automatically selecting the optimization algorithm used to solve the ADMM subproblem in each iteration; this dynamic scheduling strategy chooses the optimization algorithm mainly based on the primal residual and the dual residual. In addition, we combine a hierarchical communication structure with this strategy to guarantee communication efficiency. Experiments on the ZiQiang 4000 cluster show that the dynamic scheduling strategy, combined with the hierarchical communication structure, effectively improves both the convergence speed and the communication efficiency of the algorithm.
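The abstract gives no implementation details, but the residual-based scheduling idea can be illustrated with a small sketch. Below is a minimal, hypothetical Python example, not the paper's actual implementation: a serial consensus-ADMM loop for a toy lasso problem that switches the x-subproblem solver between a cheap inexact gradient step and an exact Cholesky solve depending on the current primal and dual residuals. All thresholds, solver choices, and function names are illustrative assumptions.

```python
# Hypothetical sketch of residual-based solver scheduling inside ADMM.
# Not the paper's implementation; thresholds and solvers are assumptions.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200, tol=1e-6):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A; Atb = A.T @ b
    # Precompute the exact-solver factorization once (used when scheduled).
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    lip = np.linalg.norm(AtA, 2) + rho  # Lipschitz constant of the x-subproblem
    r_norm = s_norm = np.inf
    for k in range(iters):
        # Dynamic scheduling: while the residuals are large, a cheap inexact
        # gradient step on the x-subproblem suffices; near convergence,
        # switch to the exact (Cholesky) solve for accuracy.
        if max(r_norm, s_norm) > 1e-2:
            grad = AtA @ x - Atb + rho * (x - z + u)
            x = x - grad / lip                                 # inexact update
        else:
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # exact update
        z_old = z
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
        r_norm = np.linalg.norm(x - z)               # primal residual
        s_norm = np.linalg.norm(rho * (z - z_old))   # dual residual
        if r_norm < tol and s_norm < tol:
            break
    return x, k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 30))
    x_true = np.zeros(30); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(60)
    x_hat, its = admm_lasso(A, b)
    print(f"converged in {its + 1} iterations")
```

In a distributed AD-ADMM setting the same test would run on the master (or, with the paper's hierarchical structure, on intermediate group leaders), choosing each worker's subproblem solver per iteration; the serial loop here only demonstrates the selection criterion.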
