Abstract

Federated learning is an emerging distributed deep learning paradigm in which clients separately train local neural network models on private data and then jointly aggregate a global model at a central server. Mobile edge computing aims to deploy mobile applications at the edge of wireless networks. Federated learning in mobile edge computing is a promising distributed framework for deploying deep learning algorithms in many application scenarios. The bottleneck of federated learning in mobile edge computing lies in the limited computation, bandwidth, energy, and data resources of mobile clients. This article first illustrates the typical use cases of federated learning in mobile edge computing, and then investigates the state-of-the-art resource optimization approaches in federated learning. The resource-efficient techniques for federated learning are broadly divided into two classes: black-box and white-box approaches. For black-box approaches, the techniques of training tricks, client selection, data compensation, and hierarchical aggregation are reviewed. For white-box approaches, the techniques of model compression, knowledge distillation, feature fusion, and asynchronous update are discussed. After that, a neural-structure-aware resource management approach with module-based federated learning is proposed, where mobile clients are assigned different subnetworks of the global model according to the status of their local resources. Experiments demonstrate the superiority of our approach in elastic and efficient resource utilization.
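
The following is a minimal sketch of the module-based idea described above, assuming the global model is split into named modules and each client reports a coarse resource tier; the module names, tier-to-module mapping, and module-wise averaging rule are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical global model: each module is represented by a weight matrix.
global_model = {
    "block1": np.zeros((16, 16)),
    "block2": np.zeros((16, 16)),
    "block3": np.zeros((16, 16)),
    "head":   np.zeros((16, 10)),
}

# Assumed mapping from a client's resource tier to the modules it trains.
TIER_TO_MODULES = {
    "low":    ["block1", "head"],
    "medium": ["block1", "block2", "head"],
    "high":   ["block1", "block2", "block3", "head"],
}

def assign_subnetwork(resource_tier):
    """Return the subset of global modules a client should train locally."""
    names = TIER_TO_MODULES[resource_tier]
    return {name: global_model[name].copy() for name in names}

def aggregate(updates):
    """Average each module over the clients that actually trained it."""
    new_model = {}
    for name, weights in global_model.items():
        received = [u[name] for u in updates if name in u]
        new_model[name] = np.mean(received, axis=0) if received else weights
    return new_model

# One illustrative round: clients with different resource tiers send back
# updates only for the modules they were assigned.
updates = []
for tier in ["low", "high", "medium"]:
    sub = assign_subnetwork(tier)
    # Stand-in for local training: perturb the assigned modules slightly.
    updates.append({k: v + 0.01 * np.random.randn(*v.shape) for k, v in sub.items()})

global_model = aggregate(updates)
print({k: v.shape for k, v in global_model.items()})
```

In this sketch, low-resource clients never load or transmit the deeper blocks, so computation and communication scale with each client's reported capacity while every module is still refined by the subset of clients able to train it.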
