Abstract
With growing global attention to data privacy and security, effectively utilizing distributed data while protecting personal privacy has become an important research topic. Federated learning (FL) addresses data silos and privacy concerns by enabling multiple devices or servers to collaboratively train a shared model without sending raw data to a central server. Although many federated learning algorithms have been proposed, their performance differences under the same model remain underexplored. This study experimentally compares and analyzes the performance of different federated learning algorithms on the same model. Using the Fashion-MNIST dataset, this paper compares four commonly used federated learning algorithms in detail: Federated Averaging (FedAvg), Federated Stochastic Gradient Descent (FedSGD), Stochastic Controlled Averaging for Federated Learning (SCAFFOLD), and Federated Proximal (FedProx). The experimental results show that FedProx performs best on all evaluation metrics, followed by SCAFFOLD and FedAvg, while FedSGD performs worst. These insights into algorithm performance under non-IID data inform suitability for practical applications and guide future research.
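For readers unfamiliar with the compared algorithms, the core contrast is that FedAvg aggregates client models by a data-size-weighted average, while FedProx additionally penalizes each client's local objective with a proximal term (mu/2)*||w - w_global||^2 to limit client drift under non-IID data. The following minimal NumPy sketch illustrates both ideas; the function names and hyperparameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: average client parameters weighted by local data size."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

def fedprox_local_step(w, w_global, grad, lr=0.1, mu=0.01):
    """One local SGD step with FedProx's proximal penalty (mu/2)*||w - w_global||^2.
    Its gradient, mu * (w - w_global), pulls the client model back toward the
    global model; setting mu = 0 recovers the plain local SGD used by FedAvg."""
    return w - lr * (grad + mu * (w - w_global))

# Toy check: two clients holding 100 and 300 samples respectively.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
w_global = fedavg_aggregate(clients, [100, 300])
print(w_global)  # [2.5 3.5]
```

The weighting by local sample count is what makes the aggregate an unbiased average over the pooled data when clients differ in dataset size; the proximal coefficient mu trades off local fitting against agreement with the global model.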