Abstract

A fundamental issue for federated learning (FL) is how to achieve efficient training under complex, dynamic communication environments. This issue can be alleviated by the fact that modern edge devices can usually connect to the edge server via multiple communication channels (e.g., 4G, LTE, and 5G): multi-channel communication increases the available bandwidth and incurs lower communication cost and energy consumption than a single high-speed channel. However, if the communication data cannot be properly allocated across the channels in a complex, dynamic network, multi-channel communication still wastes resources (e.g., bandwidth, battery life, and monetary cost). In this paper, we propose an efficient FL framework that consists of two parts: layered gradient compression (LGC) and a learning-driven control algorithm. Specifically, with LGC, the local gradients of a device are coded into several layers, and each layer is sent to the server along a different channel. The FL server aggregates the received layers of local gradients from the devices to update the global model and sends the result back to the devices. Furthermore, we prove the convergence of LGC and formally define the problem of resource-efficient federated learning with LGC. We then propose a learning-driven algorithm with which each device dynamically adjusts its local computation (i.e., the number of local stochastic gradient descent steps) and communication decisions (i.e., the compression level of each layer and the layer-to-channel mapping) in each iteration. Results from extensive experiments show that the proposed framework significantly reduces training time and improves resource utilization (energy consumption and monetary cost) while achieving test accuracy similar to that of well-known FL baselines.
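The abstract does not specify the exact coding scheme behind LGC, so the following is only a minimal sketch of the layered idea under an assumed successive top-k sparsification: each layer carries progressively smaller-magnitude gradient coordinates, layers are mapped to different channels, and the server reconstructs an estimate from whichever layers arrive. The function names, layer fractions, and channel labels are illustrative, not the paper's actual interface.

```python
import numpy as np

def encode_layers(gradient, layer_fractions):
    """Split a flat gradient into sparsified layers (assumed top-k scheme).

    layer_fractions: fraction of coordinates kept in each successive layer,
    e.g. [0.01, 0.04, 0.15]. Layer 0 carries the largest-magnitude entries;
    later layers refine the residual, so losing them degrades gracefully.
    """
    residual = gradient.copy()
    layers = []
    for frac in layer_fractions:
        k = max(1, int(frac * residual.size))
        idx = np.argpartition(np.abs(residual), -k)[-k:]  # top-k coordinates of the residual
        layers.append((idx, residual[idx]))
        residual[idx] = 0.0  # the next layer refines what remains
    return layers

def decode_layers(received_layers, dim):
    """Server-side reconstruction from whichever layers actually arrived."""
    grad = np.zeros(dim)
    for idx, values in received_layers:
        grad[idx] = values  # index sets are disjoint by construction
    return grad

# Toy round: a device encodes its gradient into three layers and maps each
# layer to a (hypothetical) channel; the server decodes what it receives.
rng = np.random.default_rng(0)
g = rng.normal(size=10_000)
layers = encode_layers(g, layer_fractions=[0.01, 0.04, 0.15])
channel_map = {"5G": layers[0], "LTE": layers[1], "4G": layers[2]}

# Suppose the 4G layer is dropped this round; the estimate is still usable.
arrived = [channel_map["5G"], channel_map["LTE"]]
g_hat = decode_layers(arrived, dim=g.size)
print("relative reconstruction error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```

In this reading, the per-layer compression levels and the layer-to-channel mapping are exactly the communication decisions that the paper's learning-driven control algorithm would adjust at each iteration.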
