Abstract

The coupling of federated learning (FL) and multi-access edge computing (MEC) has the potential to foster numerous applications. However, training FL models fast enough with the limited communication and computing resources of mobile edge devices poses great challenges. Motivated by recent developments in ultra-fast wireless transmission and promising advances in artificial intelligence (AI) computing hardware on mobile devices, in this paper we propose a time-efficient FL scheme for future mobile edge devices, called dynamic batch sizes assisted federated learning (DBFL), with a convergence guarantee. DBFL allows batch sizes to increase dynamically during training, which unleashes the parallel computing potential of GPUs for on-device training and effectively leverages the fast wireless transmissions (WiFi-6, 5G, 6G, etc.) of mobile edge devices. Furthermore, based on the derived convergence bound of DBFL, we develop a batch size control scheme that minimizes the total time consumption of FL over mobile edge devices by adjusting the incremental factor appropriately, trading off the “talking”, i.e., communication time, against the “working”, i.e., computing time. Extensive simulations validate the effectiveness of the proposed DBFL algorithm and demonstrate that our scheme outperforms existing time-efficient FL approaches in terms of total time consumption under various settings.
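
To illustrate the idea of a dynamically increasing batch size controlled by an incremental factor, the following minimal sketch shows one possible schedule of the form B_t = min(B_max, ceil(B_0 * rho^t)). The names `batch_size_schedule`, `b0`, `rho`, and `b_max` are hypothetical illustrations and do not reflect the paper's exact algorithm or notation; the abstract only states that batch sizes grow during training and that the incremental factor balances communication and computing time.

```python
import math

def batch_size_schedule(b0: int, rho: float, round_idx: int, b_max: int) -> int:
    """Hypothetical batch-size schedule: B_t = min(B_max, ceil(B_0 * rho^t)).

    b0     -- initial batch size (assumed)
    rho    -- incremental factor (> 1) trading "talking" vs. "working" time (assumed)
    b_max  -- cap imposed by device memory (assumed)
    """
    return min(b_max, math.ceil(b0 * rho ** round_idx))

if __name__ == "__main__":
    # Larger batches in later rounds mean fewer local gradient steps per epoch,
    # so per-round computing time can shrink relative to communication time.
    for t in range(10):
        print(f"round {t}: batch size = {batch_size_schedule(32, 1.3, t, 1024)}")
```

In practice, the incremental factor would be chosen from a convergence-bound-driven optimization rather than fixed by hand, as the abstract indicates; the snippet above only shows the scheduling mechanism.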
