Abstract

Along with the rapid growth of the Internet of Things (IoT) and artificial intelligence (AI), edge AI, a system that uses locally generated data to train a machine learning (ML) model at the wireless network edge, has attracted considerable attention from academia and industry [1,2,3]. In particular, federated learning (FL) enables an edge server to coordinate massive numbers of IoT devices in collaboratively learning a shared ML model without accessing the raw data generated by each device [4]. Nevertheless, FL faces several critical challenges, including heterogeneous local datasets (data heterogeneity) and heterogeneous computing capabilities (device heterogeneity) across mobile devices [5]. The training process of FL is further constrained by the available training time and by the limited communication resources for supporting a large number of IoT devices. The well-known FedAvg scheme [4], which combines local stochastic gradient descent (local SGD) with partial device participation, is therefore widely used to reduce training time and cost [6]. Moreover, several improved FL schemes have been proposed to reduce the inter-device variance caused by data heterogeneity [7, 8] and device heterogeneity [5, 9].
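The FedAvg pattern described above (local SGD on each sampled client, followed by a dataset-size-weighted average at the server) can be sketched as follows. This is a minimal illustrative sketch on a toy least-squares problem, not the paper's implementation; all function names, hyperparameters, and the synthetic non-IID data are assumptions for illustration.

```python
import random
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5, batch=8):
    """A few local SGD steps on one client's data (least-squares loss)."""
    w = w.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        idx = rng.choice(len(X), size=min(batch, len(X)), replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad
    return w

def fedavg(clients, rounds=20, frac=0.5, seed=0):
    """FedAvg with partial participation: each round, a random fraction of
    clients trains locally; the server averages the returned models,
    weighted by each client's local dataset size."""
    rng = random.Random(seed)
    d = clients[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(rounds):
        m = max(1, int(frac * len(clients)))        # partial participation
        sampled = rng.sample(clients, m)
        total = sum(len(X) for X, _ in sampled)
        w = sum(len(X) / total * local_sgd(w, X, y) for X, y in sampled)
    return w

# Toy usage: clients with shifted (non-IID) features but a shared ground truth.
data_rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0])
clients = []
for k in range(10):
    X = data_rng.normal(loc=k * 0.1, size=(32, 2))  # per-client feature shift
    y = X @ true_w + 0.01 * data_rng.normal(size=32)
    clients.append((X, y))
w = fedavg(clients)
```

Because every client here shares the same underlying optimum, the averaged model recovers it; under real data heterogeneity the local optima differ, which is exactly the inter-device variance that the improved schemes cited above target.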
