Abstract

In federated learning (FL), multiple clients train models on their local datasets and submit local gradients to a server for aggregation. However, malicious clients may degrade the model's performance by submitting poisoned gradients. Moreover, in most application scenarios clients do not want to reveal their trained models, since their private data may be inferred from them. In addition, most FL protocols lack an incentive mechanism to supervise participants and cannot punish malicious ones, which is unfair to honest participants. To tackle these problems, we propose BPFL, a blockchain-based privacy-preserving federated learning scheme that resists poisoning attacks. In BPFL, a blockchain-based incentive mechanism supervises participants and promptly traces malicious behavior. BPFL also protects the privacy of both the local and the aggregated models even when some participants are malicious, and it detects poisoned gradients by computing the cosine similarity between each client's local gradient and the aggregated gradient under a Paillier cryptosystem with threshold decryption. Experiments show that BPFL improves model accuracy on CIFAR-10 from 10% to 75% under poisoning attacks, demonstrating that BPFL effectively resists poisoning attacks while preserving the privacy of local and aggregated models.
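To illustrate the detection idea, the following is a minimal Python/NumPy sketch of the cosine-similarity check described above, performed here on plaintext gradients for clarity; in BPFL the comparison is carried out on Paillier-encrypted values with threshold decryption. The function names and the similarity threshold are illustrative assumptions, not taken from the paper.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two flattened gradient vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def flag_poisoned_updates(local_grads: dict, aggregated_grad: np.ndarray,
                              threshold: float = 0.0) -> dict:
        # Flag clients whose gradients point away from the aggregate.
        # `threshold` is a hypothetical cutoff; the paper evaluates this
        # criterion under encryption rather than on plaintext values.
        flags = {}
        for client_id, grad in local_grads.items():
            sim = cosine_similarity(grad.ravel(), aggregated_grad.ravel())
            flags[client_id] = sim < threshold  # low similarity -> suspected poisoning
        return flags

    # Example: one benign client and one client submitting an inverted
    # (poisoned) gradient.
    rng = np.random.default_rng(0)
    agg = rng.normal(size=100)
    grads = {"client_a": agg + 0.1 * rng.normal(size=100), "client_b": -agg}
    print(flag_poisoned_updates(grads, agg))  # {'client_a': False, 'client_b': True}

The benign client's gradient is closely aligned with the aggregate (similarity near 1), while the inverted gradient yields similarity -1 and is flagged.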
