Abstract

Federated learning (FL) has developed rapidly with the growing emphasis that individuals and industry place on data. Federated learning allows individual participants to jointly train a global model without sharing local data, which significantly enhances data privacy. However, federated learning is vulnerable to poisoning attacks by malicious participants. Because the server has no access to the participants' training processes, attackers can compromise the global model by uploading carefully crafted malicious local updates while posing as normal participants. Existing model poisoning attacks typically craft harmful local updates by adding small perturbations to the locally trained model, with the attacker searching for a perturbation size that bypasses robust detection methods while corrupting the global model as much as possible. In contrast, we propose a novel model poisoning attack based on the momentum of historical information (MPHM): the attacker crafts new malicious updates by dynamically constructing perturbations from the historical information of local training, which makes the malicious updates more effective and stealthy. Our attack aims to indiscriminately reduce the testing accuracy of the global model using minimal information. Experiments show that under classical defenses, our attack degrades the accuracy of the global model significantly more than other advanced poisoning attacks.
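
As a rough illustration of the idea (not the paper's exact formulation), the sketch below shows how an attacker could maintain a momentum over past local updates and use it as a dynamically adapted perturbation direction applied to an otherwise benign-looking update. The function name craft_malicious_update and the coefficients beta (momentum) and gamma (perturbation scale) are hypothetical placeholders, not parameters taken from the paper.

```python
import numpy as np

def craft_malicious_update(benign_update, history_momentum, beta=0.9, gamma=1.0):
    """Illustrative momentum-based perturbation (names and defaults are assumptions).

    benign_update    : the attacker's locally trained, benign-looking update vector
    history_momentum : running momentum of past update directions kept by the attacker
    beta             : coefficient blending old history with the newest update
    gamma            : perturbation scale chosen to stay within the defense's tolerance
    """
    # Accumulate historical information as an exponential moving average of updates.
    history_momentum = beta * history_momentum + (1.0 - beta) * benign_update

    # Use the normalized momentum as the perturbation direction, so the direction
    # adapts across rounds instead of being a fixed hand-crafted vector.
    direction = history_momentum / (np.linalg.norm(history_momentum) + 1e-12)

    # Shift the update against the momentum direction to degrade the global model,
    # while keeping its magnitude close to benign updates for stealth.
    malicious_update = benign_update - gamma * np.linalg.norm(benign_update) * direction

    return malicious_update, history_momentum

# Example round with a 10-dimensional flattened model update.
rng = np.random.default_rng(0)
update = rng.normal(size=10)
momentum = np.zeros(10)
poisoned_update, momentum = craft_malicious_update(update, momentum)
```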
