With growing global attention to data privacy and security, how to effectively use distributed data while protecting personal privacy has become an important research topic. Federated learning (FL) addresses data silos and privacy concerns by enabling multiple devices or servers to collaboratively train a shared model without sending raw data to a central server. Although a variety of federated learning algorithms have been proposed, research comparing their performance under the same model remains limited. The goal of this study is to experimentally compare and analyze the performance of different federated learning algorithms on the same model. Using the Fashion-MNIST dataset, this paper compares four commonly used federated learning algorithms in detail: Federated Averaging (FedAvg), Federated Stochastic Gradient Descent (FedSGD), Stochastic Controlled Averaging for Federated Learning (SCAFFOLD), and Federated Proximal (FedProx). The experimental results show that FedProx performs best on all evaluation metrics, followed by SCAFFOLD and FedAvg, while FedSGD performs the worst. These insights into algorithm performance under non-IID data inform their suitability for practical applications and guide future research.
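As a point of reference for the first algorithm in the comparison, the sketch below shows the core server-side aggregation step of FedAvg: a weighted average of client parameters by local dataset size. This is a minimal illustration, not the paper's implementation; the names `fedavg_aggregate`, `client_weights`, and `client_sizes` are assumptions introduced here.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: weighted average of client model parameters.

    client_weights: list (one entry per client) of lists of np.ndarray,
                    each inner list holding that client's parameters layer by layer.
    client_sizes:   number of local training samples per client, used as
                    aggregation weights (hypothetical names, for illustration).
    """
    total = float(sum(client_sizes))
    # For each layer, sum the clients' parameters weighted by their data share.
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Hypothetical usage: two clients, one weight matrix each.
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
sizes = [30, 70]
global_params = fedavg_aggregate(clients, sizes)  # -> 0.3 * ones((2, 2))
```

FedSGD differs mainly in that clients send single-batch gradients rather than locally updated weights, while SCAFFOLD and FedProx extend this scheme with control variates and a proximal regularization term, respectively, to mitigate client drift under non-IID data.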