In recent years, federated learning has provided an effective solution for data privacy protection and has therefore been widely adopted in finance, healthcare, and other fields. However, traditional federated learning suffers from a single point of failure, because it relies on a centralized server for global model aggregation. It also lacks an incentive mechanism, which results in insufficient contributions from local devices to global model training. In this paper, we propose a blockchain-based decentralized federated learning method, named BD-FL, to address these problems. BD-FL combines blockchain and edge computing techniques to build a decentralized federated learning system. An incentive mechanism is introduced to motivate local devices to actively participate in federated learning model training. To minimize the cost of model training, BD-FL employs a preference-based stable matching algorithm that binds local devices to appropriate edge servers, reducing communication overhead. In addition, we propose a reputation-based practical Byzantine fault tolerance (R-PBFT) algorithm to optimize the consensus process of global model training in the blockchain. Experimental results show that BD-FL reduces model training time by up to 34.9% compared with several baseline federated learning methods, and the R-PBFT algorithm improves the training efficiency of BD-FL by 12.2%.
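The abstract does not detail how the preference-based stable matching is carried out; the sketch below illustrates one plausible reading, a deferred-acceptance (Gale-Shapley-style) matching between local devices and capacity-limited edge servers. The preference construction (devices ranking servers by communication cost, servers ranking devices by reputation), the capacity limits, and all identifiers are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch (assumed, not from the paper): many-to-one deferred
# acceptance matching of devices to edge servers with complete preference lists.

def stable_match(device_prefs, server_prefs, capacity):
    """
    device_prefs: {device: [server, ...]}  ordered best-first (e.g., by lowest comm. cost)
    server_prefs: {server: [device, ...]}  ordered best-first (e.g., by device reputation)
    capacity:     {server: int}            max devices an edge server can host
    Returns {device: server} for all matched devices.
    """
    # Rank lookup so a server can compare devices in O(1).
    rank = {s: {d: i for i, d in enumerate(prefs)} for s, prefs in server_prefs.items()}
    next_choice = {d: 0 for d in device_prefs}   # index of next server each device proposes to
    hosted = {s: [] for s in server_prefs}       # devices currently accepted by each server
    free = list(device_prefs)                    # devices not yet (or no longer) matched

    while free:
        d = free.pop()
        prefs = device_prefs[d]
        if next_choice[d] >= len(prefs):
            continue                             # device exhausted its list; remains unmatched
        s = prefs[next_choice[d]]
        next_choice[d] += 1
        hosted[s].append(d)
        if len(hosted[s]) > capacity[s]:
            # Server over capacity: reject its least-preferred current device,
            # which then proposes to its next-ranked server on a later iteration.
            worst = max(hosted[s], key=lambda x: rank[s][x])
            hosted[s].remove(worst)
            free.append(worst)
    return {d: s for s, ds in hosted.items() for d in ds}


if __name__ == "__main__":
    devices = {"d1": ["e1", "e2"], "d2": ["e1", "e2"], "d3": ["e2", "e1"]}
    servers = {"e1": ["d2", "d1", "d3"], "e2": ["d1", "d3", "d2"]}
    print(stable_match(devices, servers, {"e1": 1, "e2": 2}))
    # -> {'d2': 'e1', 'd3': 'e2', 'd1': 'e2'}: no device/server pair prefers
    #    each other over their assigned partners, i.e., the matching is stable.
```

Deferred acceptance guarantees a stable assignment, so no device and edge server would both rather deviate from the matching, which is one way such a scheme could keep communication overhead low once preferences encode communication cost.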