Abstract

Federated learning (FL), as a distributed machine learning paradigm, promotes personal privacy by processing data locally at each client. However, because it relies on a centralized server for model aggregation, standard FL is vulnerable to server malfunctions, untrustworthy servers, and external attacks. To address these issues, we propose a decentralized FL framework that integrates blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL). In a round of the proposed BLADE-FL, each client broadcasts its trained model to the other clients, aggregates its own model with the received ones, and then competes to generate a block before starting its local training for the next round. We evaluate the learning performance of BLADE-FL and develop an upper bound on the global loss function. We then verify that this bound is convex with respect to the number of overall aggregation rounds $K$, and optimize the computing resource allocation to minimize the upper bound. We also note a critical problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to disguise their cheating behavior. Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal $K$, the learning parameters, and the proportion of lazy clients. Based on the MNIST and Fashion-MNIST datasets, we see that the experimental results are consistent with the analytical ones. Specifically, the gap between the developed upper bound and the experimental results is lower than $5\%$, and the $K$ optimized from the upper bound effectively minimizes the loss function.
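To make the round structure described above concrete, the following is a minimal sketch of one BLADE-FL round, assuming a simple FedAvg-style average over the broadcast models and a toy proof-of-work for block generation; the names `local_update`, `mine_block`, and `DIFFICULTY` are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: local training is a stand-in perturbation and the
# proof-of-work is a toy hash puzzle, not the paper's actual implementation.
import hashlib
import numpy as np

NUM_CLIENTS = 5
MODEL_DIM = 10
DIFFICULTY = 2  # number of leading zero hex digits required (toy value)

def local_update(model, rng):
    """Stand-in for local SGD: nudge the model with a small random step."""
    return model - 0.01 * rng.standard_normal(model.shape)

def mine_block(payload: bytes, rng):
    """Toy proof-of-work: find a nonce whose SHA-256 hash has a fixed prefix."""
    while True:
        nonce = int(rng.integers(0, 2**32))
        digest = hashlib.sha256(payload + nonce.to_bytes(4, "big")).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest

rng = np.random.default_rng(0)
models = [rng.standard_normal(MODEL_DIM) for _ in range(NUM_CLIENTS)]

# One BLADE-FL round: train locally, broadcast, aggregate, then compete to mine.
trained = [local_update(m, rng) for m in models]           # local training at each client
aggregated = np.mean(trained, axis=0)                      # each client averages all broadcast models
nonce, block_hash = mine_block(aggregated.tobytes(), rng)  # winning client appends the block
models = [aggregated.copy() for _ in range(NUM_CLIENTS)]   # next round starts from the aggregated model
print(f"block mined with nonce={nonce}, hash={block_hash[:12]}...")
```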
