Abstract

Federated learning (FL) has gained considerable attention from the wireless communications community owing to its decentralized training and privacy-preserving nature. However, with limited radio resources and a growing number of user equipments (UEs), it is difficult to realize the strictly synchronous model updates among all participating UEs that traditional FL algorithms require. In this paper, we propose a novel asynchronous FL framework that accounts for potential failures in uploading local models and the resulting varying degrees of staleness among the models used for the global update. Specifically, we first design two working modes to adapt to systems with different communication environments and tasks of varying difficulty. Next, a central model fusion algorithm is designed to carefully determine the fusion weights during the global update. On one hand, it aims to make the most of the fresh information contained in the uploaded local models; on the other hand, it avoids biased convergence by keeping the impact of each UE proportional to its sample share. Numerical experiments validate that the proposed asynchronous FL framework achieves fast and smooth convergence and significantly enhances training efficiency.
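
To illustrate the general idea of a staleness-aware, sample-share-weighted fusion step, the following is a minimal sketch only; it is not the authors' algorithm. The exponential staleness discount, the function and parameter names (`fuse_local_model`, `base_lr`, `decay`), and the simple convex blending rule are all illustrative assumptions.

```python
import numpy as np

def fuse_local_model(global_w, local_w, staleness, ue_samples, total_samples,
                     base_lr=1.0, decay=0.5):
    """Blend one UE's uploaded weights into the global model.

    staleness: number of global rounds since the UE last downloaded the model.
    ue_samples / total_samples: the UE's sample share, keeping its long-run
    impact proportional to its data size.
    """
    share = ue_samples / total_samples
    # Discount stale updates so fresher information dominates (assumed rule).
    alpha = base_lr * share * (decay ** staleness)
    # Convex combination of the current global model and the uploaded model.
    return {name: (1 - alpha) * global_w[name] + alpha * local_w[name]
            for name in global_w}

# Usage: one asynchronous arrival from a UE holding 500 of 10,000 samples,
# whose local model is 2 global rounds stale.
global_w = {"layer": np.zeros(4)}
local_w = {"layer": np.ones(4)}
global_w = fuse_local_model(global_w, local_w, staleness=2,
                            ue_samples=500, total_samples=10_000)
```

The design intent mirrored here is that the blending weight shrinks both when a UE holds few samples and when its update is stale, so no single straggler can bias the global model.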
