Abstract

Since traditional federated learning algorithms cannot provide sufficient privacy guarantees, a growing number of approaches apply local differential privacy (LDP) techniques to federated learning to obtain strict privacy guarantees. However, the privacy budget grows rapidly with the dimension of the model parameters, and the large variance introduced by the perturbation mechanisms degrades the performance of the final model. In this paper, we propose PPeFL, a novel privacy-preserving edge federated learning framework based on LDP. Specifically, we present three LDP mechanisms to address the privacy problems arising in the federated learning process. The proposed filtering and screening with exponential mechanism (FS-EM) selects the better parameters for global aggregation based on each weight parameter's contribution to the neural network. This not only mitigates the rapid growth of the privacy budget incurred when applying a perturbation mechanism locally, but also greatly reduces communication costs. In addition, the proposed data perturbation mechanism with stronger privacy (DPM-SP) applies a secondary scrambling to participants' original data and provides strong security. Further, a data perturbation mechanism with enhanced utility (DPM-EU) is proposed to reduce the variance introduced by the perturbation. Finally, extensive experiments illustrate that the PPeFL scheme is practical and efficient, providing stronger privacy protection while preserving utility.
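As an illustrative sketch only (the abstract does not give FS-EM's exact scoring or sampling rule), selecting high-contribution parameters via the standard exponential mechanism could look like the following; the contribution scores, sensitivity, and selection count here are hypothetical:

```python
import numpy as np

def exponential_mechanism_select(scores, epsilon, sensitivity, k, rng=None):
    """Privately select k parameter indices, favouring higher scores.

    Index i is drawn with probability proportional to
    exp(epsilon * score_i / (2 * sensitivity)) — the textbook
    exponential mechanism; the paper's FS-EM may differ in detail.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()            # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    # Weighted sampling without replacement over the parameter indices.
    return rng.choice(len(scores), size=k, replace=False, p=probs)

# Hypothetical contribution scores for 8 weight parameters.
scores = [0.9, 0.1, 0.7, 0.05, 0.8, 0.2, 0.6, 0.3]
chosen = exponential_mechanism_select(scores, epsilon=1.0, sensitivity=1.0, k=3)
print(sorted(int(i) for i in chosen))
```

Only the k selected parameters would then be uploaded for global aggregation, which is one way the abstract's claimed reduction in both privacy-budget consumption and communication cost could arise.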
