Abstract

When running on a Parameter Server (PS) architecture, Distributed Stochastic Gradient Descent (D-SGD) incurs significant communication delay and overhead due to model synchronization. Moreover, given the heterogeneous computational capabilities of workers, traditional synchronization modes under-utilize computational resources because fast workers must wait for slow ones to finish their computation. Although our previous work, OSP, effectively addresses these problems by overlapping the computation and communication procedures and allowing an adaptive number of local updates in distributed training, the overlap introduces staleness, which degrades performance. In this paper, we propose a new method named LOSP, which adds local compensation to our previous synchronization mechanism to mitigate the adverse effects of overlapping synchronization. We theoretically prove that LOSP (1) preserves the same convergence rate as sequential SGD for non-convex problems, and (2) scales well, exhibiting linear speedup with respect to both the number of workers and the average number of local updates. Evaluations show that LOSP significantly outperforms state-of-the-art methods in both convergence accuracy and communication cost.
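To make the overlap-plus-compensation idea concrete, below is a minimal sketch of a single worker's round. It is an illustrative assumption, not the paper's exact algorithm: the worker runs several local SGD steps while the previous round's synchronization is still in flight, and when the (stale) global model arrives, it re-applies the local progress accumulated during the overlap as a simple form of local compensation. The function names (`worker_round`, `apply_compensation`) and the specific compensation rule are hypothetical.

```python
# Minimal sketch (assumed, not the authors' exact LOSP update rule): a
# worker overlaps communication with local updates and compensates for
# the staleness of the arriving global model.
import numpy as np


def worker_round(x_global, grad_fn, lr=0.01, local_steps=4):
    """One synchronization round on a single worker.

    x_global    : model received from the parameter server (possibly stale)
    grad_fn     : stochastic gradient oracle, grad_fn(x) -> np.ndarray
    local_steps : number of local updates performed while the previous
                  synchronization is still in flight (the overlap)
    """
    x = x_global.copy()
    x_start = x_global.copy()
    # Local updates proceed without waiting for the server (the overlap).
    for _ in range(local_steps):
        x -= lr * grad_fn(x)
    # Accumulated local progress, pushed to the server as this round's update.
    return x - x_start


def apply_compensation(x_new_global, delta_local):
    # Local compensation (illustrative rule): when the fresh global model
    # arrives, re-apply the local progress made during the overlap so that
    # the stale synchronization does not discard it.
    return x_new_global + delta_local


# Toy usage: quadratic objective f(x) = 0.5 * ||x||^2, so grad_fn(x) = x.
x = np.ones(3)
delta = worker_round(x, grad_fn=lambda v: v)
x = apply_compensation(x, delta)  # server-side aggregation omitted
```

Without the compensation step, the worker would overwrite its in-flight local progress with the stale global model each round, which is precisely the staleness-induced degradation the abstract attributes to plain OSP.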
