Abstract

Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data locally on participating devices. In this paper, we study a new model of multiple federated learning services coexisting at a multi-access edge computing server. Accordingly, the sharing of CPU resources among learning services at each mobile device for the local training process, and the allocation of communication resources among mobile devices for exchanging learning information, must be considered. Furthermore, the convergence performance of each learning service depends on its hyper-learning rate parameter, which needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL, regarding the energy consumption of mobile devices and the overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL problem. Unlike the centralized approach, the decentralized approach requires more iterations to converge, but it allows each learning service to independently manage its local resources and learning process without revealing the learning service information. Our simulation results demonstrate the convergence of the proposed algorithms and their superior performance compared to a heuristic strategy.
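To illustrate the centralized approach at a high level, the block coordinate descent idea alternates between optimizing one block of variables (e.g., CPU frequency) while holding the other (e.g., bandwidth fraction) fixed. The sketch below is a toy illustration only: the objective, the symbols `f` (CPU frequency), `w` (bandwidth fraction), and the constants are all hypothetical stand-ins, not the paper's MS-FEDL formulation.

```python
import numpy as np

def energy_time_objective(f, w, kappa=1e-2, data=1.0):
    """Toy cost: CPU energy grows with f^2, local compute time shrinks
    with f, and communication time shrinks with bandwidth fraction w.
    All constants are illustrative assumptions."""
    energy = kappa * f ** 2
    time = data / f + data / w
    return energy + time

def bcd(f0=1.0, w0=0.5, iters=20):
    """Block coordinate descent: alternately minimize over each block
    (here via a simple 1-D grid search) with the other block fixed."""
    f, w = f0, w0
    f_grid = np.linspace(0.1, 5.0, 500)   # feasible CPU frequencies
    w_grid = np.linspace(0.05, 1.0, 500)  # feasible bandwidth fractions
    for _ in range(iters):
        # Block 1: optimize CPU frequency f with w fixed
        f = f_grid[np.argmin([energy_time_objective(x, w) for x in f_grid])]
        # Block 2: optimize bandwidth fraction w with f fixed
        w = w_grid[np.argmin([energy_time_objective(f, x) for x in w_grid])]
    return f, w

f_opt, w_opt = bcd()
```

Because each block update never increases the objective, the alternation converges for this kind of smooth, per-block-convex cost; the paper's actual algorithm operates on the full MS-FEDL problem with coupled constraints across services.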

Highlights

  • Nowadays, following the great success of Machine Learning (ML) and Artificial Intelligence (AI) applications, more and more intelligent services have transformed our lives

  • We study an under-explored problem: the shared computation and communication resource allocation, and the learning parameter control, for multiple federated learning services coexisting at the edge network

  • To capture the trade-off between the energy consumption of mobile devices and the overall learning time, we propose a resource optimization problem, namely MS-FEDL, that decides the optimal CPU frequency for each learning service and the fraction of the total uplink bandwidth for each user equipment (UE)


Summary

INTRODUCTION

Nowadays, following the great success of Machine Learning (ML) and Artificial Intelligence (AI) applications, more and more intelligent services have transformed our lives. Compared to a cloud datacenter, the machine learning training process can be done at the mobile edge network with the help of multi-access edge computing (MEC) servers, resulting in lower communication latency for exchanging learning information. These enablers unlock the full potential of edge ML applications for the vision of truly intelligent next-generation communication systems in 6G [1]. Resource allocation process: in this work, we consider a flexible CPU sharing model, such as CPU frequency sharing among virtual machines or containers, to perform the local learning updates. Since those virtual instances often incur a high deployment cost, we consider a pre-allocating CPU strategy for the different services.
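The pre-allocating CPU strategy can be pictured as splitting a device's total CPU frequency budget once among the coexisting services, rather than re-partitioning it every round. The sketch below is a hypothetical illustration, assuming a simple workload-proportional split; the function name, the 2 GHz budget, and the per-service cycle counts are made up for the example and are not from the paper.

```python
def preallocate_cpu(f_max, workloads):
    """Split a device's total CPU frequency f_max (Hz) among services
    in proportion to each service's per-round workload (CPU cycles).
    A heavier service receives a proportionally larger CPU slice."""
    total = sum(workloads)
    return [f_max * cycles / total for cycles in workloads]

# Illustrative device: 2 GHz budget shared by three learning services
shares = preallocate_cpu(2.0e9, [4e8, 1e8, 5e8])
```

The shares always sum to the device budget, so the allocation is feasible by construction; the paper's formulation instead chooses these CPU fractions jointly with bandwidth and the hyper-learning rate to balance energy and learning time.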

RELATED WORKS
Aggregation and Feedbacks
Multi-Service Sharing Model
Problem formulation
Centralized Approach
Decentralized Approach
Primal update
Numerical Settings
Numerical Results
Findings
PRIVACY DISCUSSION
CONCLUSION