Abstract

As AI technology advances, awareness of and concern about data privacy are growing, and training data is becoming increasingly fragmented. To make better use of such distributed data while preserving privacy, federated learning has been proposed: it allows multiple distributed devices to jointly train a machine learning model without exchanging their raw training data. A key challenge in federated learning is that the data distribution across clients is heterogeneous, which causes client drift, i.e., local model updates move toward each client's local optimum, degrading the performance of the global model. Many federated learning algorithms have been proposed to address this problem. In this paper, we review existing federated learning optimization strategies. In our view, existing strategies for mitigating client drift can be roughly divided into two categories: model aggregation and model training. We introduce each category in turn and conclude with our perspectives on future developments.
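
To make the training-then-aggregation pattern and the client-drift phenomenon concrete, here is a minimal Python sketch of FedAvg-style aggregation on a toy least-squares task. The functions `local_update` and `fedavg`, and the toy data, are illustrative assumptions, not the method of any paper surveyed here.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=5):
    """Hypothetical local step: gradient descent on a client's own
    (heterogeneous) data, so the model drifts toward that client's
    local optimum. Raw data never leaves the client."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """FedAvg-style aggregation: average the locally trained models,
    weighted by each client's dataset size."""
    sizes = [len(y) for _, y in client_data]
    total = sum(sizes)
    local_models = [local_update(global_w, d) for d in client_data]
    return sum((n / total) * w for n, w in zip(sizes, local_models))

# Toy demo: two clients whose label distributions differ (a constant
# shift the intercept-free model cannot fit), so their local optima
# differ -- a simple stand-in for statistical heterogeneity.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(size=(50, 2))
    y = X @ np.array([1.0, -1.0]) + shift
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):  # communication rounds
    w = fedavg(w, clients)
print(w)  # global model sits between the two clients' local optima
```

Because each client runs several local epochs on its own skewed data before averaging, the local models pull in different directions; the averaged global model need not minimize the overall loss, which is the drift the surveyed optimization strategies aim to correct.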
