Abstract

Decentralized machine learning plays an essential role in improving training efficiency and has been applied in many real-world scenarios, such as edge computing and the IoT. In practice, however, networks are dynamic, and there is a risk of information leakage during the communication process. To address this problem, we propose a decentralized parallel stochastic gradient descent algorithm with differential privacy (D-(DP)2SGD) for dynamic networks. With rigorous analysis, we show that D-(DP)2SGD converges at a rate of O(1/√(Kn)) while satisfying ε-DP, which is almost the same convergence rate as that of previous works without privacy concerns. To the best of our knowledge, our algorithm is the first known decentralized parallel SGD algorithm that can be implemented in dynamic networks while taking privacy preservation into consideration.
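For reference, the ε-DP guarantee invoked above is, in its standard form, the condition below. This is the textbook definition of ε-differential privacy; the paper's exact neighboring-dataset relation for each node's local data is an assumption here and may differ.

    % Standard ε-differential-privacy condition; the adjacency relation used for
    % the nodes' local datasets is an assumption, not taken from the paper.
    \[
      \Pr\bigl[\mathcal{M}(D) \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[\mathcal{M}(D') \in S\bigr]
      \quad \text{for all adjacent datasets } D, D' \text{ and all measurable sets } S,
    \]
    where $\mathcal{M}$ denotes the randomized training mechanism.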

Highlights

  • Decentralized machine learning, as a modeling mechanism that allocates training tasks and compute resources to achieve a balance between training speed and accuracy, has demonstrated strong potential in various areas, especially for training large models on large datasets [1,2,3], such as ImageNet [4]

  • The objective f(x) can be rephrased as a linear combination of the local loss functions f_i(x)

  • We show that our proposed D-(DP)2SGD algorithm satisfies ε-DP and achieves a convergence rate of O(1/√(Kn)) when K is large enough

Summary

Introduction

Decentralized machine learning, as a modeling mechanism that allocates training tasks and compute resources to achieve a balance between training speed and accuracy, has demonstrated strong potential in various areas, especially for training large models on large datasets [1,2,3], such as ImageNet [4]. The objective f(x) can be rephrased as a linear combination of the local loss functions f_i(x). This formulation covers many popular decentralized learning models, including deep learning [5], linear regression [6], and logistic regression [7]. Decentralized parallel stochastic gradient descent (D-PSGD) is one of the fundamental methods for solving large-scale machine learning tasks in static networks [1]. In D-PSGD, all nodes compute stochastic gradients using their local datasets and exchange the results with their neighbors iteratively. Based on differential privacy, we present a new decentralized stochastic gradient descent algorithm for dynamic networks (D-(DP)2SGD), which offers strong protection for the local datasets of decentralized nodes.
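To make the mechanism concrete, the sketch below runs rounds of decentralized parallel SGD in which each node perturbs its local stochastic gradient with Laplace noise before updating. It assumes the common equally weighted formulation f(x) = (1/n) Σ_i f_i(x), a fixed doubly stochastic mixing matrix W, a toy least-squares local loss, and illustrative names such as dp_dpsgd_round and noise_scale of our own choosing; it is an illustrative sketch, not the authors' exact D-(DP)2SGD, which additionally handles time-varying (dynamic) topologies and calibrates the noise to the privacy budget ε.

    # Illustrative sketch only (not the authors' exact D-(DP)2SGD): decentralized
    # parallel SGD where each node adds Laplace noise to its local stochastic
    # gradient before the update, on a fixed ring topology.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_stochastic_gradient(x, A, b, batch_size=8):
        """Mini-batch gradient of a toy least-squares local loss 1/2 * ||A x - b||^2."""
        idx = rng.choice(len(A), size=batch_size, replace=False)
        Ab, bb = A[idx], b[idx]
        return Ab.T @ (Ab @ x - bb) / batch_size

    def dp_dpsgd_round(models, datasets, mixing_matrix, lr=0.05, noise_scale=0.1):
        """One synchronized round: (1) each node averages its neighbors' models via the
        doubly stochastic mixing matrix, (2) takes a step with a Laplace-perturbed gradient."""
        n, d = models.shape
        mixed = mixing_matrix @ models                       # gossip / neighbor averaging
        new_models = np.empty_like(models)
        for i in range(n):
            g = local_stochastic_gradient(models[i], *datasets[i])
            g_noisy = g + rng.laplace(scale=noise_scale, size=d)   # privacy-motivated noise
            new_models[i] = mixed[i] - lr * g_noisy
        return new_models

    # Tiny synthetic run: 4 nodes on a ring, each with 100 local samples drawn
    # from the same ground-truth linear model.
    n_nodes, dim = 4, 5
    x_true = rng.normal(size=dim)
    datasets = []
    for _ in range(n_nodes):
        A = rng.normal(size=(100, dim))
        datasets.append((A, A @ x_true + 0.01 * rng.normal(size=100)))
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])                # ring topology, doubly stochastic
    models = np.zeros((n_nodes, dim))
    for k in range(200):
        models = dp_dpsgd_round(models, datasets, W)
    print("average distance to x_true:", np.linalg.norm(models.mean(axis=0) - x_true))

The mixing matrix W is what a dynamic-network variant would replace with a time-varying sequence W_k, and the noise_scale would be set from the gradient sensitivity and the privacy budget ε rather than fixed by hand.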

Related Work
System Model and Problem Description
Algorithm
Main Results
Result
Experiments
Conclusion