Abstract

The coupling of federated learning (FL) and multi-access edge computing (MEC) has the potential to foster numerous applications. However, training FL models quickly is challenging given the limited communication and computing resources of mobile edge devices. Motivated by recent developments in ultra-fast wireless transmission and promising advances in artificial intelligence (AI) computing hardware on mobile devices, in this paper we propose a time-efficient FL scheme for future mobile edge devices, called dynamic-batch-size-assisted federated learning (DBFL), with a convergence guarantee. DBFL allows batch sizes to increase dynamically during training, which unleashes the parallel computing potential of GPUs for on-device training and effectively leverages the fast wireless transmissions (WiFi-6, 5G, 6G, etc.) of mobile edge devices. Furthermore, based on the derived convergence bound of DBFL, we develop a batch size control scheme that minimizes the total time consumption of FL over mobile edge devices, trading off “talking”, i.e., communication time, against “working”, i.e., computing time, by adjusting the incremental factor appropriately. Extensive simulations validate the effectiveness of the proposed DBFL algorithm and demonstrate that our scheme outperforms existing time-efficient FL approaches in terms of total time consumption under various settings.
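
To make the dynamic batch-size idea concrete, the sketch below shows one way a growing batch size could be plugged into a FedAvg-style round loop. The geometric schedule B_t = min(B_max, ⌈ρ^t · B_0⌉), the incremental factor ρ, and all function names and parameters are illustrative assumptions for this sketch, not the paper's actual algorithm or implementation.

```python
# Minimal sketch of a DBFL-style round loop: FedAvg with a geometrically
# growing local batch size. The schedule B_t = min(B_max, ceil(rho^t * B_0))
# and all parameters (rho, B0, B_max, local steps) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_client(n=512, d=10):
    """Synthetic logistic-regression data for one client (illustrative)."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    return X, y

def local_update(w, X, y, batch_size, steps=5, lr=0.1):
    """A few local SGD steps on the client's data with the current batch size."""
    w = w.copy()
    for _ in range(steps):
        idx = rng.choice(len(y), size=min(batch_size, len(y)), replace=False)
        Xb, yb = X[idx], y[idx]
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # sigmoid predictions
        grad = Xb.T @ (p - yb) / len(yb)           # logistic-loss gradient
        w -= lr * grad
    return w

def dbfl_train(clients, rounds=20, B0=8, rho=1.3, B_max=256):
    """FedAvg rounds where the batch size grows by the incremental factor rho."""
    d = clients[0][0].shape[1]
    w_global = np.zeros(d)
    for t in range(rounds):
        B_t = min(B_max, int(np.ceil(B0 * rho ** t)))   # dynamic batch size
        locals_ = [local_update(w_global, X, y, B_t) for X, y in clients]
        w_global = np.mean(locals_, axis=0)             # server-side averaging
        print(f"round {t:2d}  batch size {B_t}")
    return w_global

if __name__ == "__main__":
    clients = [make_client() for _ in range(4)]
    dbfl_train(clients)
```

In this sketch, larger ρ makes batches grow faster, so each round does more local computation per uploaded model, shifting time from "talking" to "working"; the paper's control scheme chooses this factor based on the derived convergence bound.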
