Abstract

Federated learning (FL) enables resource-constrained node devices to learn a shared model while keeping the training data local. Because recent research has demonstrated multiple privacy leakage attacks in FL, e.g., gradient inference attacks and membership inference attacks, differential privacy (DP) is widely applied as one of the most effective privacy protection mechanisms. Despite the benefits DP brings, we observe that it also introduces random perturbations to client updates, which affect the robust aggregation algorithms. We reveal a novel poisoning attack under the cover of DP in FL, named the DP-Poison attack. Specifically, DP-Poison is designed to achieve four goals: 1) maintaining main-task performance; 2) launching a successful attack; 3) evading the robust aggregation algorithms in FL; and 4) preserving the effectiveness of DP privacy protection. To achieve these goals, we formulate multiple optimization objectives and generate the DP noise with a genetic algorithm. The optimization ensures that, while benign updates change randomly, the malicious updates move toward the global model after the DP noise is added, making them more likely to be accepted by the robust aggregation algorithms. Extensive experiments show that DP-Poison achieves a nearly 100% attack success rate while satisfying all four goals.
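To illustrate the core idea, the following is a minimal toy sketch (not the authors' implementation) of a genetic search over Gaussian DP-scale noise vectors that steers a malicious update toward the global model while each candidate noise vector keeps the scale mandated by the privacy budget. All names (`fitness`, `sigma`, `global_model`, population sizes, etc.) and the single-objective fitness are simplifying assumptions; the paper optimizes multiple objectives.

```python
# Toy sketch of DP-noise selection via a genetic algorithm (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

dim = 64                                      # flattened update dimension (toy size)
sigma = 0.1                                   # DP Gaussian noise std fixed by the privacy budget (assumed)
global_model = rng.normal(size=dim)           # stand-in for the current global model
malicious_update = rng.normal(size=dim)       # stand-in for the attacker's poisoned update

def fitness(noise):
    """Lower is better: distance of the noised malicious update to the global model."""
    return np.linalg.norm(malicious_update + noise - global_model)

# Initial population: i.i.d. Gaussian noise at the DP scale, so every candidate
# individually still looks like legitimate DP noise.
pop = rng.normal(scale=sigma, size=(40, dim))

for generation in range(100):
    scores = np.array([fitness(n) for n in pop])
    parents = pop[np.argsort(scores)[:10]]            # selection: keep the 10 fittest
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(dim) < 0.5                  # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(scale=0.1 * sigma, size=dim)   # small mutation
        children.append(child)
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([fitness(n) for n in pop])]
print("distance without noise:        ", fitness(np.zeros(dim)))
print("distance with optimized noise: ", fitness(best))
```

In this sketch, the evolved noise is drawn from the same Gaussian scale as benign DP noise but is selected to reduce the malicious update's distance to the global model, mirroring the intuition that benign updates drift randomly while the poisoned update is nudged toward acceptance by robust aggregation.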
