Abstract

Federated learning has been recognized as a promising scheme for tackling privacy issues in multi-access edge computing: edge devices periodically upload machine learning (ML) model updates, instead of the original user data, to the edge server. However, such federated edge learning (FEL) systems still leak privacy, since the model updates accessible to the server can be exploited to recover the original data. In this paper, we consider an FEL scheme based on personalized differential privacy, which alleviates the privacy leakage by adding a different noise perturbation to the model updates of each edge device. Note that the noise perturbations may degrade the ML model performance, which is captured by the global loss function. It is thus necessary to achieve a loss-privacy tradeoff in FEL by determining the noise scales and the numbers of local model updates. To address this challenge, we first derive a convergence upper bound on the global loss function as well as a closed-form expression for the privacy leakage from an adversarial perspective. We then propose a distributed mechanism that optimizes the choices of noise scales and numbers of local model updates, even when the server is unaware of the personalized privacy preferences of the edge devices. Extensive theoretical analysis and numerical evaluations demonstrate the effectiveness of the proposed method in terms of both privacy preservation and the global loss of the learned model.
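
As a concrete illustration of the core idea (a minimal sketch, not the paper's exact mechanism), the following Python snippet shows how per-device noise perturbation of local updates might look under the standard Gaussian mechanism. The names perturb_update, clip_norm, and noise_scale are hypothetical, and the per-device noise scales stand in for each device's personalized privacy preference.

```python
import numpy as np

def perturb_update(update, clip_norm, noise_scale, rng):
    """Clip a local update to bound its l2 sensitivity, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=update.shape)
    return clipped + noise

# Each device i picks its own noise scale sigma_i according to its privacy
# preference, so the server only ever observes perturbed updates.
rng = np.random.default_rng(0)
local_updates = [rng.standard_normal(10) for _ in range(3)]  # toy local model updates
sigmas = [0.5, 1.0, 2.0]                                     # personalized noise scales
perturbed = [perturb_update(u, clip_norm=1.0, noise_scale=s, rng=rng)
             for u, s in zip(local_updates, sigmas)]
global_update = np.mean(perturbed, axis=0)                   # server-side averaging
```

In such a scheme, a larger noise scale yields stronger privacy for that device but a noisier contribution to the aggregate, which is precisely the loss-privacy tradeoff the abstract describes.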
