Federated learning is a distributed machine learning paradigm in which model training proceeds through the exchange of intermediate results between a central server and federated clients. Owing to this decentralized setting, conventional machine learning algorithms are of limited use for training federated models, and the alternating direction method of multipliers (ADMM), designed for distributed optimization, is leveraged instead. Despite its considerable promise for federated learning, ADMM still faces challenges in computational efficiency, communication efficiency, and data security. To address these challenges, this study proposes the privacy-preserving and communication-efficient stochastic ADMM (PPCESADMM) algorithm, which improves computational efficiency through stochastic optimization, reduces communication cost through sparse communication, and protects federated clients' data via homomorphic encryption. Theoretical analysis confirms that PPCESADMM converges under mild conditions and establishes a convergence rate of O(1/T). Experiments show that our algorithm reduces communication cost by 65.10% and 44.32% relative to the ADMM and CEADMM algorithms, respectively. Furthermore, our method surpasses classical federated learning algorithms such as FedAvg, FedAvgM, and SCAFFOLD in terms of convergence, achieving superior convergence precision within the predefined number of training epochs. Finally, our algorithm converges to the same results as its variant without homomorphic encryption, albeit at the cost of increased computation time.
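For context, the following is a minimal sketch of the global-consensus ADMM formulation commonly used as the starting point for ADMM-based federated learning. The abstract does not state the exact PPCESADMM updates, so the local losses f_i, local models x_i, dual variables y_i, global model z, and penalty parameter rho below are generic placeholders rather than the paper's notation.

% Generic global-consensus ADMM for N federated clients (a sketch under the
% assumptions above; not the paper's exact PPCESADMM formulation).
\begin{align}
  &\min_{x_1,\dots,x_N,\,z} \ \sum_{i=1}^{N} f_i(x_i)
    \quad \text{s.t.} \quad x_i = z, \quad i = 1,\dots,N, \\
  &x_i^{k+1} = \arg\min_{x_i} \ f_i(x_i)
    + \langle y_i^{k},\, x_i - z^{k} \rangle
    + \tfrac{\rho}{2}\,\lVert x_i - z^{k} \rVert^2, \\
  &z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \Bigl( x_i^{k+1} + \tfrac{1}{\rho}\, y_i^{k} \Bigr), \\
  &y_i^{k+1} = y_i^{k} + \rho \bigl( x_i^{k+1} - z^{k+1} \bigr).
\end{align}

Read against this template, the contributions described in the abstract can be understood as replacing the exact local x_i-minimization with a cheaper stochastic step, sparsifying the quantities each client communicates to the server, and homomorphically encrypting the uploaded updates; the precise form of each modification is given in the full paper.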