Federated learning (FL) enables resource-constrained client devices to collaboratively learn a shared model while keeping the training data local. Because recent research has demonstrated multiple privacy-leakage attacks in FL, e.g., gradient inference attacks and membership inference attacks, differential privacy (DP) is widely applied as one of the most effective privacy-protection mechanisms. Despite its benefits, we observe that DP also introduces random changes to client updates, which affects the robust aggregation algorithms. We reveal a novel poisoning attack under the cover of DP in FL, named the DP-Poison attack. Specifically, DP-Poison is designed to achieve four goals: 1) maintaining the main-task performance; 2) launching a successful attack; 3) evading the robust aggregation algorithms in FL; and 4) preserving the effectiveness of DP privacy protection. To achieve these goals, we formulate multiple optimization objectives and generate the DP noise through a genetic algorithm. The optimization ensures that, while benign updates change randomly, malicious updates move toward the global model after the DP noise is added, so they are more easily accepted by the robust aggregation algorithms. Extensive experiments show that DP-Poison achieves a nearly 100% attack success rate while satisfying all four goals.
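To make the abstract's core intuition concrete, below is a minimal, hypothetical sketch of the idea of shaping DP noise with a genetic algorithm so that a malicious update drifts toward the global model while the noise magnitude stays close to what the DP mechanism would produce. All variable names, the fitness function, and the GA hyperparameters here are assumptions for illustration, not the authors' actual DP-Poison procedure.

```python
import numpy as np

# Hypothetical sketch: evolve a Gaussian noise vector so that the noisy
# malicious update points toward the global model direction (the abstract's
# intuition), while keeping the noise norm near a fixed DP noise scale.
# Dimensions, scales, and the fitness trade-off are illustrative assumptions.

rng = np.random.default_rng(0)

DIM = 128                 # flattened model-update dimension (toy size)
SIGMA = 0.1               # nominal DP Gaussian noise scale (assumed)
POP, GENS = 50, 200       # GA population size and number of generations (assumed)

global_update = rng.normal(size=DIM)      # stand-in for the global model direction
malicious_update = rng.normal(size=DIM)   # stand-in for the poisoned local update

def fitness(noise):
    """Reward noise that (a) pulls the noisy update toward the global update
    and (b) keeps its norm close to the expected DP noise magnitude."""
    noisy = malicious_update + noise
    cos = noisy @ global_update / (np.linalg.norm(noisy) * np.linalg.norm(global_update))
    norm_penalty = abs(np.linalg.norm(noise) - SIGMA * np.sqrt(DIM))
    return cos - 0.1 * norm_penalty

# Initialize a population of candidate noise vectors drawn from the DP mechanism.
population = rng.normal(scale=SIGMA, size=(POP, DIM))

for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-POP // 2:]]               # selection
    children = parents[rng.integers(len(parents), size=POP - len(parents))]
    children = children + rng.normal(scale=0.02, size=children.shape)  # mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
cos_after = ((malicious_update + best) @ global_update
             / (np.linalg.norm(malicious_update + best) * np.linalg.norm(global_update)))
print("cosine similarity to global update after adding evolved noise:", cos_after)
```

Under these assumptions, the evolved noise looks statistically like ordinary DP noise in magnitude, but its direction is chosen so the malicious update aligns better with the global model, which is the property that would help it pass distance- or similarity-based robust aggregation checks.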