Abstract

Federated learning preserves data privacy by training machine learning models in a distributed fashion: local models are trained on client devices and aggregated on the server. Prevalent aggregation algorithms in federated learning perform well in homogeneous settings but suffer from degraded convergence in heterogeneous settings due to non-IID data distributions. In this paper, we examine the shortcomings of existing work and identify that the memory loss of optimizers across aggregation steps limits convergence performance. In response, we propose FedRL, a new adaptive aggregation algorithm supervised by a policy-based deep reinforcement learning agent. Using real-world datasets, we evaluate the effectiveness of FedRL against state-of-the-art adaptive aggregation algorithms from the literature and show its superiority in accelerating convergence to a target accuracy.
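To make the aggregation step concrete, the following is a minimal sketch of server-side aggregation in the FedAvg style that the abstract alludes to. The function names, dictionary-of-arrays model representation, and dataset-size weighting are illustrative assumptions for exposition; this is the common baseline, not the paper's FedRL method, whose RL-driven aggregation policy is not specified in the abstract.

```python
# Illustrative sketch of server-side aggregation in federated learning.
# This is a plain FedAvg-style weighted average, NOT the paper's FedRL
# method; names and weighting scheme are assumptions for exposition.
from typing import Dict, List

import numpy as np


def aggregate(client_models: List[Dict[str, np.ndarray]],
              client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Average client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    global_model = {
        name: np.zeros_like(param)
        for name, param in client_models[0].items()
    }
    for model, size in zip(client_models, client_sizes):
        weight = size / total
        for name, param in model.items():
            global_model[name] += weight * param
    return global_model


# Example round: three clients with different local data volumes.
clients = [{"w": np.array([1.0, 2.0])},
           {"w": np.array([3.0, 0.0])},
           {"w": np.array([0.5, 0.5])}]
sizes = [100, 50, 50]
print(aggregate(clients, sizes))  # {'w': array([1.375, 1.125])}
```

An adaptive scheme such as the one the abstract describes would replace the fixed size-proportional weights with weights chosen dynamically, here by a reinforcement learning policy, rather than recomputing the same static average each round.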
