Abstract
Federated learning (FL) can overcome the "data island" problem while protecting data privacy, which has attracted broad attention. However, centralized FL is vulnerable to single-point failure. Although decentralized, tamper-proof blockchains can address these issues, it remains difficult to identify a benign benchmark gradient and to eliminate poisoning attacks in the later stages of global model aggregation. To address these problems, we present FedG2L, a global-to-local privacy-preserving federated consensus scheme against poisoning attacks. The scheme effectively reduces the impact of poisoning attacks on model accuracy. In the global aggregation stage, a gradient-similarity-based secure consensus algorithm (SecPBFT) eliminates malicious gradients; during this procedure, the data owners' gradients are not leaked. We then propose an improved ACGAN algorithm that generates local data to further update the model free of poisoning attacks. Finally, we theoretically prove the security and correctness of our scheme. Experimental results demonstrate that model accuracy improves by at least 55% over the no-defense baseline, and that the attack success rate is reduced by more than 60%.
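To make the gradient-similarity idea concrete, the sketch below shows one common way such filtering can work: score each client's gradient by cosine similarity against a robust benchmark (here, the coordinate-wise median) and aggregate only the gradients above a threshold. This is a minimal illustration under assumed details, not the paper's actual SecPBFT protocol, which additionally runs inside a Byzantine-fault-tolerant consensus and keeps gradients private; the function name, median benchmark, and threshold are illustrative assumptions.

```python
import numpy as np

def filter_gradients_by_similarity(gradients, threshold=0.0):
    """Keep gradients whose cosine similarity to a coordinate-wise
    median benchmark exceeds `threshold`; drop the rest as suspicious.

    gradients: list of flattened 1-D numpy arrays, one per client.
    Returns (benign_indices, aggregated_gradient).

    NOTE: illustrative sketch only; SecPBFT itself is not public here.
    """
    G = np.stack(gradients)           # shape: (n_clients, dim)
    benchmark = np.median(G, axis=0)  # robust stand-in for a benign gradient

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    benign = [i for i, g in enumerate(gradients)
              if cosine(g, benchmark) > threshold]

    # Aggregate only the gradients judged benign (simple mean here).
    aggregated = G[benign].mean(axis=0)
    return benign, aggregated

# Example: four honest clients plus one sign-flipped (poisoned) gradient.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, 8) for _ in range(4)]
poisoned = [-honest[0]]               # crude sign-flip poisoning attack
benign, agg = filter_gradients_by_similarity(honest + poisoned)
print("benign clients:", benign)      # the poisoned index is excluded
```

In practice the aggregation step would typically be weighted and wrapped in privacy-preserving machinery (e.g., secure aggregation), so plaintext gradients as used above are a simplification for exposition.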