Abstract

Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles. Due to communication-cost limitations and security requirements, it is of paramount importance to analyze information in a decentralized manner rather than aggregating data at a fusion center. To train large-scale machine learning models, edge/fog computing is often leveraged as an alternative to centralized learning. We consider the problem of learning model parameters in a multiagent system where data are processed locally at distributed edge nodes. A class of minibatch stochastic alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model. To address two critical challenges in distributed learning systems, namely the communication bottleneck and straggler nodes (nodes with slow responses), an error-control-coding-based stochastic incremental ADMM is investigated. Given an appropriate minibatch size, we show that the minibatch stochastic ADMM-based method converges at a rate of O(1/√k), where k denotes the number of iterations. Numerical experiments reveal that the proposed algorithm is communication-efficient, fast to respond, and robust in the presence of straggler nodes compared with state-of-the-art algorithms.
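To make the abstract's setting concrete, here is a rough, illustrative sketch (not the paper's exact algorithm) of a minibatch stochastic incremental ADMM for consensus least-squares: a token visits agents in a fixed ring, and the visited agent performs a linearized minibatch x-update, a dual-ascent step, and an incremental update of the global consensus variable. The objective, step sizes, and all names (`si_admm`, `minibatch_grad`, `rho`, `eta`) are assumptions for illustration.

```python
import numpy as np

def minibatch_grad(A, b, x, batch, rng):
    # minibatch stochastic gradient of the averaged local loss (1/2m)||Ax - b||^2
    idx = rng.choice(len(b), size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

def si_admm(data, rho=1.0, eta=0.1, iters=900, batch=10, seed=0):
    # Sketch of minibatch stochastic incremental ADMM (sI-ADMM) for
    # consensus least-squares; data is a list of per-agent (A, b) pairs.
    rng = np.random.default_rng(seed)
    N, n = len(data), data[0][0].shape[1]
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    y = [np.zeros(n) for _ in range(N)]   # local dual variables
    z = np.zeros(n)                       # global consensus variable
    for k in range(iters):
        i = k % N                         # token passed in a fixed cyclic order
        A, b = data[i]
        g = minibatch_grad(A, b, x[i], batch, rng)
        old = x[i] + y[i] / rho
        # linearized x-update: closed form of the proximal-linear subproblem
        x[i] = (rho * z - y[i] - g + x[i] / eta) / (rho + 1.0 / eta)
        y[i] = y[i] + rho * (x[i] - z)    # dual ascent step
        z = z + ((x[i] + y[i] / rho) - old) / N  # incremental z-update
    return z
```

Only the visited agent computes in each iteration, and the z-update touches just that agent's contribution, which is what makes the incremental scheme communication-light compared with schemes that aggregate all agents every round.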

Highlights

  • Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles

  • We extend our preliminary work in [34] and investigate the possibility of coding for stochastic incremental alternating direction method of multipliers (ADMM)

  • We note that Theorem 1 provides a sufficient condition to guarantee the convergence of the proposed stochastic I-ADMM (sI-ADMM); csI-ADMM has the same convergence properties as those of sI-ADMM under the same conditions as those in Theorem 2


Summary

INTRODUCTION

Quantization and local computation along with error compensation: in these methods, accuracy is sacrificed to achieve lower communication costs [21]. To provide tolerance to link failures and straggler nodes in edge computing with ADMM, we present an efficient and straggler-tolerant decentralized algorithm such that the agents can collaboratively find an optimal solution through local computations and limited information exchange. With one-hop communication, each agent exchanges global model parameter information only with directly connected neighboring agents. Approximation via first-order Taylor expansion and mini-batch stochastic optimization will be proposed to approximate such non-linear functions and to enable fast computation of the x-update.
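The paper's coding scheme for straggler tolerance is not spelled out here, but the general idea of error-control coding for distributed gradients can be sketched with a simple cyclic-repetition code: replicate each data partition on (s + 1) workers so the exact full gradient is recoverable from the responses of any subset that excludes at most s stragglers. The placement, the function name `coded_gradient`, and the use of precomputed per-partition gradients are assumptions for illustration.

```python
import numpy as np

def coded_gradient(parts, num_workers, stragglers, seed=0):
    # Sketch of cyclic-repetition gradient coding: with P == num_workers
    # partitions, each partition is held by (stragglers + 1) workers, so the
    # full gradient survives the loss of any `stragglers` responses.
    P, r = len(parts), stragglers + 1
    assert P == num_workers, "this simple placement assumes one partition per worker"
    # worker w holds partitions w, w+1, ..., w+r-1 (mod P)
    held = {w: [(w + j) % P for j in range(r)] for w in range(num_workers)}
    rng = np.random.default_rng(seed)
    slow = set(rng.choice(num_workers, size=stragglers, replace=False))
    recovered = {}
    for w in range(num_workers):
        if w in slow:
            continue  # straggler: its response never arrives
        for p in held[w]:
            recovered.setdefault(p, parts[p])  # first responder holding p wins
    return sum(recovered.values())  # exact full gradient despite stragglers
```

Because every partition has (stragglers + 1) holders, the master can stop waiting as soon as (num_workers - stragglers) workers respond, trading extra storage/computation for tolerance to slow nodes.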

Edge Computing for Mini-Batch Stochastic I-ADMM
Coding Schemes for sI-ADMM
ALGORITHM ANALYSES
Convergence Analysis
Communication Analysis
Simulation Results
CONCLUSION

