Abstract

In federated learning systems, participants collaboratively train a joint model without sharing their raw data. However, these systems are susceptible to poisoning attacks because the server cannot supervise local training. Most existing model poisoning attacks perturb all parameters, producing large model modifications that defenses can easily detect through statistical similarity checks. We therefore propose FedIMP, an untargeted model poisoning attack that leverages parameter importance to improve both stealthiness and effectiveness. We first estimate parameter importance using the Fisher information and selectively poison only the high-importance parameters. We then formulate an optimization problem to derive the optimal malicious boosting coefficient, allowing the attack to evade defense mechanisms while maximizing its impact. Experimental results validate the effectiveness of FedIMP, showing that it degrades model performance and slows convergence under various aggregation algorithms. Our findings highlight a critical vulnerability in federated learning systems and offer insights for developing more robust defenses against poisoning attacks.
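To make the mechanism concrete, the following is a minimal PyTorch sketch of the two steps the abstract describes: estimating per-parameter importance from the diagonal empirical Fisher information, then flipping and boosting a malicious update only on the most important coordinates. The function names (`fisher_importance`, `poison_update`), the top-fraction threshold, and the fixed `boost` value are illustrative assumptions; in the paper, the boosting coefficient is derived by solving an optimization problem rather than fixed by hand.

```python
import torch
import torch.nn.functional as F

def fisher_importance(model, data_loader, device="cpu"):
    """Estimate per-parameter importance as the diagonal of the
    empirical Fisher information, i.e. the average squared gradient
    of the loss over local data. (Illustrative approximation; the
    paper's exact estimator may differ.)"""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    model.eval()
    num_batches = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for f, p in zip(fisher, model.parameters()):
            if p.grad is not None:
                f += p.grad.detach() ** 2  # accumulate squared gradients
        num_batches += 1
    return [f / max(num_batches, 1) for f in fisher]

def poison_update(benign_update, fisher, top_frac=0.1, boost=5.0):
    """Sign-flip and boost only the top `top_frac` most important
    coordinates of the benign update, leaving the rest untouched so
    the poisoned update stays statistically close to benign ones.
    The fixed `boost` stands in for the optimized coefficient."""
    flat_fisher = torch.cat([f.flatten() for f in fisher])
    k = max(1, int(top_frac * flat_fisher.numel()))
    threshold = torch.topk(flat_fisher, k).values.min()
    poisoned = []
    for u, f in zip(benign_update, fisher):
        mask = (f >= threshold).float()  # 1 on high-importance coordinates
        poisoned.append(u * (1 - mask) - boost * u * mask)
    return poisoned
```

Under these assumptions, an adversarial client would submit `poison_update(u, fisher_importance(model, loader))` in place of its benign update `u`; because only a small fraction of coordinates is modified, the poisoned update remains hard to flag with statistical similarity checks.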
