Abstract

Federated learning (FL) has recently become one of the most active research topics in network edge intelligence. In the FL framework, user equipments (UEs) train local machine learning (ML) models and transmit the trained models to an aggregator, which forms a global model and sends it back to the UEs, thereby enabling collaborative model training. In large-scale and dynamic edge networks, both local model training and model transmission may not always succeed due to constrained power and computing resources at mobile devices, wireless channel impairments, bandwidth limitations, etc., which directly degrades FL performance in terms of model accuracy and/or training time. Moreover, since artificial intelligence (AI) techniques inevitably incur costs, the benefits and costs of deploying edge intelligence must be quantified before AI is used to improve network performance. It is therefore imperative to understand the relationship between the required multi-dimensional resources and FL performance in order to facilitate FL-enabled edge intelligence. In this paper, we construct an analytical model for investigating the relationship between ML model accuracy and consumed network resources in FL-enabled edge networks. Based on the analytical model, we explicitly quantify the trained model accuracy given the spatial-temporal domain distribution and the available user computing and communication resources. Numerical results validate the effectiveness of our theoretical modeling and analysis. The analytical model provides useful guidelines for appropriately promoting FL-enabled edge network intelligence.
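
To make the FL workflow described above concrete, the following is a minimal sketch of one federated round with FedAvg-style weighted aggregation. It is not the paper's analytical model; the linear-regression local update, the learning rate, the number of UEs, and all function names are illustrative assumptions.

```python
import numpy as np

def local_update(global_weights, data, lr=0.1, epochs=5):
    """Local training at one UE (toy linear-regression SGD); an assumed, illustrative update rule."""
    w = global_weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def aggregate(local_weights, sample_counts):
    """FedAvg-style aggregation at the edge aggregator: average weighted by local sample counts."""
    total = sum(sample_counts)
    return sum(n / total * w for w, n in zip(local_weights, sample_counts))

# Toy federated rounds over a few hypothetical UEs
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
ues = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    ues.append((X, y))

global_w = np.zeros(2)
for rnd in range(20):
    # In a real edge network, some local updates may fail to arrive
    # due to the resource and channel constraints discussed above.
    updates = [local_update(global_w, d) for d in ues]
    global_w = aggregate(updates, [len(d[1]) for d in ues])
print("estimated weights:", global_w)
```

In this sketch, a UE whose training or transmission fails would simply be omitted from `updates` in a given round, which is the kind of resource-dependent degradation the paper's analytical model aims to quantify.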
