Abstract

Existing asynchronous federated learning methods have effectively addressed the low training efficiency of synchronous methods. However, constrained by a centralized trust model, they often neglect incentives for the participating parties. In addition, they handle low-quality model providers in a uniform way, which degrades distributed training results. This paper proposes BCAFL, a blockchain-based asynchronous federated learning protection framework that incorporates model validation and incentive mechanisms to encourage party contributions. Moreover, BCAFL tailors contribution-accumulation strategies to participants in different states so that their resource advantages are fully utilized. To counter poisoning attacks by malicious parties, a multi-party-verified dynamic aggregation factor and a filter mechanism are introduced to improve the reliability of the global model. Simulation results show that BCAFL ensures the reliability and efficiency of asynchronous collaborative learning and strengthens the model's resistance to attacks. When trained on the MNIST handwritten-digit dataset, BCAFL reaches approximately 90% accuracy within 20 rounds. Compared with existing state-of-the-art methods, it reduces the accuracy loss under data-poisoning attacks by 20%.
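The abstract describes the dynamic aggregation factor and filter mechanism only at a high level. The minimal Python sketch below illustrates the general idea of asynchronous aggregation with a staleness-based weight and a simple similarity filter; the function names (`async_aggregate`, `staleness_factor`, `passes_filter`), the decay formula, and the cosine-similarity check are illustrative assumptions, not the paper's actual multi-party verification scheme.

```python
import numpy as np

def staleness_factor(staleness, decay=0.5):
    """Down-weight stale updates: alpha shrinks as staleness grows (assumed form)."""
    return (1.0 + staleness) ** -decay

def passes_filter(global_model, update, threshold=0.0):
    """Reject an update whose direction strongly opposes the current global model.
    A simple single-node cosine-similarity stand-in for multi-party verification."""
    cos = np.dot(global_model, update) / (
        np.linalg.norm(global_model) * np.linalg.norm(update) + 1e-12
    )
    return cos >= threshold

def async_aggregate(global_model, update, staleness, base_lr=0.5):
    """Blend one validated client update into the global model asynchronously."""
    if not passes_filter(global_model, update):
        return global_model  # filtered out as a suspected poisoned update
    alpha = base_lr * staleness_factor(staleness)
    return (1.0 - alpha) * global_model + alpha * update

# Toy usage: a fresh update moves the global model more than a stale one.
g = np.array([1.0, 1.0, 1.0])
u = np.array([2.0, 2.0, 2.0])
print(async_aggregate(g, u, staleness=0))  # larger step toward the update
print(async_aggregate(g, u, staleness=5))  # smaller step for a stale update
```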
