Abstract

Federated learning is a form of distributed machine learning that addresses the challenges posed by training data residing on edge devices and the need to preserve the privacy of the training set. Because federated learning is highly relevant to real-world applications, different application emphases require adaptations of the federated learning framework. Although almost every newly proposed federated learning algorithm is compared against some existing algorithms, systematic experimental comparisons between commonly used federated learning algorithms remain vague and fragmented. The purpose of this paper is therefore to test and compare several representative federated learning frameworks, including FedAvg, FedProx, and MOON. After revisiting the theory and key steps of these algorithms, we analyze their performance and evaluate their respective advantages for practical applications. Finally, we summarize the test results and point out possible future challenges and research directions.
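To make the comparison concrete, the sketch below illustrates the server-side aggregation step shared by FedAvg and its successors: the server forms a weighted average of the clients' locally trained parameters, weighted by each client's local data size. This is a minimal illustrative sketch, not code from the paper; the function and variable names (e.g. `fedavg_aggregate`, `client_params`, `client_sizes`) are hypothetical.

```python
# Minimal sketch of the FedAvg server aggregation step.
# Assumptions (not from the paper): clients return plain NumPy parameter
# vectors together with their local sample counts.
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg server step).

    client_params: list of np.ndarray, one per client, all with the same shape.
    client_sizes:  list of int, number of local training samples per client.
    """
    total = sum(client_sizes)
    # Each client's update is weighted by its share of the total training data.
    weights = [n / total for n in client_sizes]
    return sum(w * p for w, p in zip(weights, client_params))

# Example: three clients with different amounts of local data.
params = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(fedavg_aggregate(params, sizes))  # -> [1.25 1.25]
```

FedProx and MOON keep this aggregation step but modify the clients' local objective (a proximal term in FedProx, a model-contrastive term in MOON) to limit drift under heterogeneous data.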
