Abstract

Federated learning enables edge devices to train a global model by sharing model parameters while avoiding the sharing of local data. In federated learning, exchanging models between many clients and a central server causes problems such as high latency and network congestion. Moreover, the local training procedure drains the batteries of power-constrained clients. To tackle these issues, federated edge learning (FEEL) applies the network-edge technologies of mobile edge computing. In this paper, we propose a novel control algorithm for a high-performance, queue-stable FEEL system. We consider a FEEL environment in which clients transmit data to their associated federated edges; these edges then locally update the global model, which is downloaded from the central server via a backhaul. Obtaining larger quantities of local data from the clients facilitates more accurate global model construction; however, the resulting surge of data arrivals can harm queue stability at the edge. Therefore, the proposed algorithm varies the number of clients selected for transmission, with the aim of maximizing the time-averaged federated learning accuracy subject to queue stability. Given this number of clients, the federated edge then selects which clients transmit on the basis of their resource status.
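For intuition, the minimal sketch below shows how a Lyapunov drift-plus-penalty rule can trade accuracy against queue backlog when choosing the per-slot client number. It is not the paper's exact formulation: the concave utility, per-client data size, service rate, and trade-off parameter V are illustrative assumptions.

import math

V = 50.0              # Lyapunov trade-off parameter: larger V favors accuracy
DATA_PER_CLIENT = 1.0 # assumed mean data arrival per selected client (MB)
SERVICE = 4.0         # assumed per-slot queue service rate (MB)
N_MAX = 10            # maximum number of selectable clients

def utility(n: int) -> float:
    """Concave accuracy proxy: diminishing returns in the client count."""
    return math.log(1 + n)

def choose_client_number(q: float) -> int:
    """Pick n(t) maximizing V*U(n) - Q(t)*n*d (drift-plus-penalty)."""
    return max(range(N_MAX + 1),
               key=lambda n: V * utility(n) - q * n * DATA_PER_CLIENT)

q = 0.0  # queue backlog at the federated edge
for t in range(20):
    n = choose_client_number(q)
    q = max(q + n * DATA_PER_CLIENT - SERVICE, 0.0)  # queue update Q(t+1)
    print(f"slot {t}: Q={q:5.1f}, selected clients n={n}")

As the backlog Q(t) grows, the rule admits fewer clients; when the queue drains, it admits more, which is the intended accuracy-versus-stability trade-off.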

Highlights

  • Deep neural networks have demonstrated strong performance in several machine learning tasks, including speech recognition, object detection, and natural language processing

  • To utilize the heterogeneous resources of clients, our proposed method is designed such that the federated edge selects clients according to their resources, instead of the random selection procedures used by traditional federated learning (FL) methods

  • A client's selection weight is inversely proportional to its remaining battery power: because it is more effective for a client to transmit its data before its battery runs out, prioritizing low-battery clients increases the accuracy of the learning model (see the selection sketch after this list)
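As a sketch of this weighting, the snippet below samples n clients with selection weights inversely proportional to remaining battery. The Client fields and the 1/battery rule are illustrative assumptions, not the paper's exact formulation.

import random
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    battery: float  # remaining battery level in (0, 1]

def select_clients(clients: list[Client], n: int) -> list[Client]:
    """Sample n clients without replacement; weight = 1 / battery."""
    pool = list(clients)
    chosen = []
    for _ in range(min(n, len(pool))):
        weights = [1.0 / max(c.battery, 1e-3) for c in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)
    return chosen

clients = [Client(i, random.uniform(0.05, 1.0)) for i in range(10)]
for c in select_clients(clients, n=3):
    print(f"client {c.cid}: battery={c.battery:.2f}")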


Summary

Introduction

Deep neural networks have demonstrated strong performance in several machine learning tasks, including speech recognition, object detection, and natural language processing. Using large quantities of training data and complex neural network architectures makes it possible to generate high-quality models, which has pushed these systems into applications requiring more computing resources as well as larger and richer datasets. The data generated at distributed clients can be as rich as the data gathered by a central data center, but this rich data is privacy-sensitive, which may preclude integrating it into the central data center. Split learning [6,7] is a novel technique for training deep neural networks across multiple data sources; it avoids sharing raw data by splitting the sequence of model layers between the client side (data sources) and the server side. FEEL can alleviate the high communication costs through its hierarchical architecture.
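As a rough illustration of the split-learning idea mentioned above (layer shapes and the cut-layer choice are assumptions for this sketch), the client computes the forward pass up to a cut layer and transmits only the resulting activations, never the raw data:

import numpy as np

rng = np.random.default_rng(0)

W_client = rng.normal(size=(16, 8))  # client-side layer (holds the raw data)
W_server = rng.normal(size=(8, 2))   # server-side layer (never sees raw inputs)

def client_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass up to the cut layer; only activations leave the client."""
    return np.maximum(x @ W_client, 0.0)  # ReLU at the cut layer

def server_forward(smashed: np.ndarray) -> np.ndarray:
    """Server completes the forward pass from the cut-layer activations."""
    return smashed @ W_server

x_private = rng.normal(size=(4, 16))   # raw data stays on the client
logits = server_forward(client_forward(x_private))
print(logits.shape)  # (4, 2)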

Motivation
Federated Learning Edge
Clients of the Federated Learning Edge Platform
Queue-Equipped Federated Edge
Client Selection of Federated Edge
Proposed Algorithm
Client Number Control by Lyapunov Optimization
Client Selection
Security and Privacy Discussions in FL
Performance Evaluation
Experiment Setting
Experimental Results
Concluding Remarks and Future Work