Federated learning, a decentralized paradigm, offers the potential to train models across multiple devices while preserving data privacy. However, threats such as malicious participants and model-parameter leakage remain serious concerns. To tackle these issues, we introduce a game-theoretic, trustworthy anti-collusion federated learning scheme that combines rational trust models with functional encryption and smart contracts for enhanced security. Our empirical evaluations on MNIST, CIFAR-10, and Fashion-MNIST underscore the influence of data distribution on performance, with IID setups consistently outperforming non-IID ones. The proposed scheme also demonstrates scalability across diverse client counts, adaptability to various tasks, and heightened security through its game-theoretic design. A critical observation is the trade-off between stronger privacy measures and model performance. Overall, our findings highlight the scheme's capability to bolster the robustness and security of federated learning.