Abstract

In federated learning systems, participants collaboratively train a joint model without sharing their raw data. However, these systems are susceptible to poisoning attacks because local training processes are difficult to supervise. Most existing model poisoning attacks target all parameters, producing large model modifications that defenses can easily detect through statistical similarity checks. We therefore propose FedIMP, an untargeted model poisoning attack that introduces the notion of parameter importance to improve both stealthiness and effectiveness. We first estimate parameter importance with the Fisher information and selectively poison only the parameters of high importance. Furthermore, we formulate an optimization problem to derive the optimal malicious boosting coefficient so that the attack evades defense mechanisms while maximizing its impact. Experimental results validate the effectiveness of FedIMP, demonstrating its ability to degrade model performance and slow convergence across various aggregation algorithms. Our approach highlights a critical vulnerability in federated learning systems and provides insights for developing more robust defense strategies against poisoning attacks.
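
To make the two mechanisms named in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a diagonal Fisher approximation (mean squared gradients over local data), a top-fraction rule for selecting important parameters, and an L2 distance budget as a stand-in for the paper's defense-evasion constraint. The names `fisher_importance`, `poison_update`, and `fit_boost` are illustrative assumptions.

```python
# Hypothetical sketch of a Fisher-guided untargeted poisoning attack.
# Assumptions (not from the paper): diagonal Fisher via squared gradients,
# top-frac coordinate selection, and an L2 budget for evasion.
import torch

def fisher_importance(model, loss_fn, data_loader):
    """Diagonal Fisher estimate: average squared gradient per parameter."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
        n_batches += 1
    return [f / max(n_batches, 1) for f in fisher]

def poison_update(benign_update, fisher, top_frac=0.1, boost=10.0):
    """Flip and boost only the top-`top_frac` most important coordinates;
    leave the rest of the benign update untouched for stealth."""
    flat_f = torch.cat([f.flatten() for f in fisher])
    k = max(1, int(top_frac * flat_f.numel()))
    threshold = flat_f.topk(k).values.min()
    poisoned = []
    for u, f in zip(benign_update, fisher):
        mask = (f >= threshold).float()
        poisoned.append(u * (1 - mask) - boost * u * mask)
    return poisoned

def fit_boost(benign_update, fisher, dist_budget, lo=1.0, hi=100.0, iters=20):
    """Binary-search the boosting coefficient so the poisoned update stays
    within an assumed L2 distance budget of the benign update (a stand-in
    for the paper's optimization-based evasion constraint)."""
    def dist(boost):
        p = poison_update(benign_update, fisher, boost=boost)
        return torch.sqrt(sum(((a - b) ** 2).sum()
                              for a, b in zip(p, benign_update)))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if dist(mid) <= dist_budget:
            lo = mid  # constraint satisfied; try a larger coefficient
        else:
            hi = mid
    return lo
```

The binary search exploits the fact that, under this construction, the distance to the benign update grows monotonically with the boosting coefficient, so the largest coefficient satisfying the budget can be found in logarithmically many probes.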
