Abstract

The alternating direction method of multipliers (ADMM) is an algorithm for solving large-scale optimization problems in machine learning. To reduce communication delay in distributed environments, asynchronous distributed ADMM (AD-ADMM) was proposed. However, because process arrival patterns in a multiprocessor cluster are unbalanced, the star-topology communication used in AD-ADMM is inefficient. Moreover, the load across the cluster is unbalanced, which reduces data-processing throughput. This paper proposes a hierarchical parameter server communication structure (HPS) and a corresponding asynchronous distributed ADMM algorithm (HAD-ADMM). The algorithm mitigates the unbalanced-arrival problem by grouping processes and updating the global variable in a scattered fashion, essentially achieving load balancing. Experiments show that HAD-ADMM is highly efficient in large-scale distributed environments and has no significant impact on convergence.
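To make the setting concrete, the sketch below shows the standard consensus ADMM iteration that distributed variants such as AD-ADMM build on: each worker minimizes a local loss plus a proximity term to the shared global variable, a parameter server averages the workers' variables, and scaled dual variables are updated. This is a minimal, sequential simulation of the synchronous baseline (not the paper's hierarchical or asynchronous scheme); the quadratic losses, penalty `rho`, and iteration count are illustrative assumptions.

```python
import numpy as np

# Simulated consensus ADMM for sum_i 0.5*||A_i x - b_i||^2, where each
# worker i holds (A_i, b_i) and all workers must agree on a global z.
# (Illustrative problem sizes and rho; not taken from the paper.)
rng = np.random.default_rng(0)
n_workers, dim, rho, n_iters = 4, 3, 1.0, 500

A = [rng.standard_normal((10, dim)) for _ in range(n_workers)]
b = [rng.standard_normal(10) for _ in range(n_workers)]

x = [np.zeros(dim) for _ in range(n_workers)]  # local primal variables
u = [np.zeros(dim) for _ in range(n_workers)]  # scaled dual variables
z = np.zeros(dim)                              # global consensus variable

for _ in range(n_iters):
    # Worker step: x_i = argmin_x f_i(x) + (rho/2)*||x - z + u_i||^2,
    # which for a quadratic f_i is a linear solve.
    for i in range(n_workers):
        lhs = A[i].T @ A[i] + rho * np.eye(dim)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # Server step: the global variable is the average of x_i + u_i.
    z = np.mean([x[i] + u[i] for i in range(n_workers)], axis=0)
    # Dual step: accumulate the consensus residual on each worker.
    for i in range(n_workers):
        u[i] += x[i] - z

# Check against the centralized least-squares solution on pooled data.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.linalg.norm(z - x_star))
```

In AD-ADMM the server step above is performed asynchronously (the server updates `z` as soon as enough workers report in), and the HPS structure of this paper replaces the single server with a hierarchy of group servers to spread that communication load.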
