Abstract

Federated Learning (FL) allows users to train a global model without sharing their original data, keeping the data usable yet invisible. However, not all users are benign: malicious users can corrupt the global model by uploading poisoned parameters. Compared with other machine learning settings, two factors make poisoning attacks easier to mount in FL: 1) malicious users can poison the uploaded parameters directly, which is more efficient than data poisoning; 2) privacy-preserving techniques, such as homomorphic encryption (HE) or differential privacy (DP), give poisoned parameters cover, making it difficult for the server to detect outliers. To resolve this dilemma, we propose VPPFL, a verifiable privacy-preserving federated learning scheme that uses DP as its underlying technology. VPPFL defends against poisoning attacks and protects users' privacy at small computation and communication cost. Specifically, we design a verification mechanism that can verify parameters perturbed by DP noise and thereby identify poisoned parameters. In addition, we provide a comprehensive analysis of the scheme's security, convergence, and complexity. Extensive experiments show that our scheme matches the detection capability of prior works while requiring only 15%-30% of their computation cost and 7%-14% of their communication cost.

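The abstract does not detail the actual VPPFL verification protocol, but the setting it describes can be illustrated with a minimal sketch: each client clips and perturbs its model update with DP noise before upload, and the server must flag poisoned updates among the noisy ones. All names and parameters below (dp_perturb, flag_outliers, clip_norm, sigma, tau) are invented for illustration and the cosine-to-median filter is only a crude stand-in for the paper's verification mechanism, assuming Gaussian-noise DP on clipped updates.

```python
# Illustrative sketch only; NOT the paper's VPPFL protocol.
import numpy as np

def dp_perturb(update, clip_norm=1.0, sigma=0.05, rng=None):
    """Clip an update to a bounded L2 norm and add Gaussian DP noise (assumed mechanism)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def flag_outliers(updates, tau=0.0):
    """Flag noisy updates whose cosine similarity to the coordinate-wise
    median update falls below tau (a simple stand-in detector)."""
    ref = np.median(np.stack(updates), axis=0)
    flags = []
    for u in updates:
        cos = u @ ref / (np.linalg.norm(u) * np.linalg.norm(ref) + 1e-12)
        flags.append(cos < tau)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_update = rng.normal(0, 0.1, size=100)
    benign = [dp_perturb(true_update + rng.normal(0, 0.02, 100), rng=rng) for _ in range(8)]
    poisoned = [dp_perturb(-5.0 * true_update, rng=rng) for _ in range(2)]
    print(flag_outliers(benign + poisoned))  # the two reversed (poisoned) updates tend to be flagged
```

The point of the sketch is the tension the abstract highlights: the same DP noise that protects benign clients also masks poisoned updates, so a verification mechanism must work on the perturbed parameters rather than the raw ones.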