Abstract

The conventional federated learning (FL) framework typically assumes that the central aggregator receives and fuses all local models synchronously, and that all agents update and train the global model synchronously as well. In a wireless network, however, limited radio resources, inevitable transmission failures, and heterogeneous computing capacities make strict synchronization among all the involved user equipments (UEs) difficult to achieve. In this paper, we propose a novel asynchronous FL framework that adapts well to the heterogeneity of users, communication environments, and learning tasks by accounting both for the possible delays in training and uploading the local models and for the resulting staleness among the received models, which heavily affects the global model fusion. A novel centralized fusion algorithm is designed to determine the fusion weights during the global update; it aims to make full use of the fresh information contained in the uploaded local models while avoiding biased convergence by keeping the impact of each UE's local dataset proportional to its sample share. Numerical experiments validate that the proposed asynchronous FL framework achieves fast and smooth convergence and significantly enhances training efficiency.
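To make the fusion idea concrete, the sketch below shows one plausible way a server could fold a single stale local update into the global model, with the mixing weight proportional to the UE's sample share and discounted by the update's staleness. This is only an illustrative Python sketch under assumed conventions: the function name `async_fuse`, the knobs `eta` and `alpha`, and the polynomial staleness decay are hypothetical and are not the paper's actual fusion rule.

```python
import numpy as np

def async_fuse(global_model, local_model, n_k, n_total, staleness,
               eta=1.0, alpha=0.5):
    """Fuse one (possibly stale) local model into the global model.

    global_model : current global parameter vector (np.ndarray)
    local_model  : parameters uploaded by UE k (np.ndarray)
    n_k, n_total : UE k's sample count and the total sample count
    staleness    : global rounds elapsed since UE k downloaded the
                   model it trained on
    eta, alpha   : base mixing rate and staleness-decay exponent
                   (hypothetical knobs, not from the paper)
    """
    # Weight is proportional to the UE's sample share, so no single
    # local dataset can bias the fused model beyond its share...
    share = n_k / n_total
    # ...and is discounted as the update grows stale, so fresher
    # information contributes more to the global update.
    decay = (1.0 + staleness) ** (-alpha)
    w = eta * share * decay
    return (1.0 - w) * global_model + w * local_model

# Example: fuse an update from a UE holding 500 of 10,000 samples,
# received three global rounds after it downloaded the model.
g = np.zeros(10)
g = async_fuse(g, np.ones(10), n_k=500, n_total=10_000, staleness=3)
```

The key design point the sketch illustrates is the separation of the two factors: the sample-share term keeps each UE's long-run influence proportional to its data, while the staleness term controls how much trust is placed in delayed information at each individual update.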
