Abstract

Federated learning has recently received widespread attention and is expected to promote the deployment of artificial intelligence technology in many fields. Privacy-preserving techniques are applied to users' local models to protect their privacy. As a result, the server cannot see each user's true model parameters, which opens a wider door for a malicious user to upload poisoned parameters and drive the training toward an ineffective model. To address this problem, in this article we propose ADFL, a poisoning attack defense framework for horizontal federated learning systems. Specifically, we design a proof generation method in which each user produces a proof that allows the server to verify whether that user is malicious. We also propose an aggregation rule that keeps the accuracy of the global model high. Several verification experiments were conducted, and the results show that our method detects malicious users effectively and ensures the global model maintains high accuracy.
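To make the threat concrete, the sketch below (an illustration only, not the ADFL proof-generation or aggregation method from the paper) shows a toy federated averaging round in which a single malicious client scales its update to dominate the average, and a generic validation-based filter that discards suspicious updates before aggregation; the client counts, the scaling factor, and the filtering threshold are all assumptions made for this example.

```python
# Illustrative sketch only: a toy FedAvg round poisoned by one malicious
# client, plus a generic validation-loss filter. This is NOT the ADFL
# method; all names, constants, and the threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fed_avg(updates):
    """Plain federated averaging over client parameter vectors."""
    return np.mean(updates, axis=0)

def val_loss(weights, X, y):
    """Mean squared error of a linear model y ~ X @ weights."""
    return float(np.mean((X @ weights - y) ** 2))

# Synthetic task: all honest clients approximate the same linear model.
w_true = np.array([1.0, -2.0, 0.5])
X_val = rng.normal(size=(200, 3))
y_val = X_val @ w_true

# Nine honest clients send near-correct parameters; one attacker sends a
# scaled, inverted vector to drag the average away from w_true.
honest = [w_true + 0.05 * rng.normal(size=3) for _ in range(9)]
malicious = [-10.0 * w_true]
updates = honest + malicious

global_naive = fed_avg(updates)
print("naive FedAvg validation loss:", val_loss(global_naive, X_val, y_val))

# Generic defense sketch: drop updates whose validation loss is far worse
# than the median before averaging (the 3x-median threshold is an assumption).
losses = np.array([val_loss(u, X_val, y_val) for u in updates])
keep = losses <= 3.0 * np.median(losses)
global_filtered = fed_avg([u for u, k in zip(updates, keep) if k])
print("filtered aggregation validation loss:",
      val_loss(global_filtered, X_val, y_val))
```

Running this sketch, the naive average is pulled far from the true model by the single attacker, while the filtered aggregation stays close to it, which mirrors the failure mode and the defensive goal described in the abstract.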
