Abstract
Federated learning has recently received widespread attention and is expected to promote the deployment of artificial intelligence technology in many fields. Privacy-preserving techniques are applied to users' local models to protect user privacy. As a result of these operations, the server cannot see each user's true model parameters, which opens a wider door for a malicious user to upload crafted parameters and drive training toward an ineffective global model. To address this problem, in this article we propose ADFL, a poisoning attack defense framework for horizontal federated learning systems. Specifically, we design a proof generation method by which each user produces a proof that allows the server to verify whether that user is malicious. We also propose an aggregation rule that keeps the accuracy of the global model high. Several verification experiments were conducted, and the results show that our method detects malicious users effectively and ensures that the global model retains high accuracy.
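The abstract does not specify the aggregation rule in detail. As a purely illustrative sketch of the general idea (averaging only the client updates whose proofs were verified), the snippet below uses hypothetical names (`aggregate`, `updates`, `flagged`) that are not taken from the paper; it is not ADFL's actual rule.

```python
import numpy as np

def aggregate(updates, flagged):
    """Average client model updates, excluding clients flagged as malicious.

    updates: dict mapping client_id -> 1-D numpy array of model parameters
    flagged: set of client_ids whose proofs failed verification (assumed to be
             produced by an upstream proof-checking step, as in the framework
             described in the abstract)
    """
    accepted = [u for cid, u in updates.items() if cid not in flagged]
    if not accepted:
        raise ValueError("no verified updates to aggregate")
    # Plain federated averaging over the accepted (verified) updates only.
    return np.mean(accepted, axis=0)
```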