Abstract

We consider federated learning (FL) with multiple wireless edge servers, each having its own local coverage area. We focus on speeding up training in this increasingly practical setup. Our key idea is to utilize the clients located in the overlapping coverage areas of adjacent edge servers (ESs): in the model-downloading stage, clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data. These clients then broadcast their updated models to multiple ESs, acting as bridges that share the trained models between servers. Even when some ESs are given biased datasets within their coverage regions, their training processes can be assisted by adjacent servers through the clients in the overlapping regions. As a result, the proposed scheme does not require costly communication with the central cloud server (located at a higher tier than the edge servers) for model synchronization, significantly reducing the overall training time compared to conventional cloud-based FL systems. Extensive experimental results show remarkable performance gains of our scheme compared to existing methods. Our design targets latency-sensitive applications where edge-based FL is essential, e.g., when a number of connected cars/drones must cooperate (via FL) to quickly adapt to dynamically changing environments.
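
To make the client-side mechanism concrete, the following is a minimal sketch (not the authors' implementation) of one round at a client located in an overlapping coverage area: it averages the models downloaded from adjacent ESs, refines the averaged model with local data, and returns the result to every ES in range. The function names, the simple squared-error objective, and the SGD hyperparameters are illustrative assumptions, not details from the paper.

```python
import numpy as np

def average_models(models):
    """Element-wise average of the parameter vectors received from adjacent ESs."""
    return np.mean(np.stack(models), axis=0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on a toy squared-error objective (illustrative only)."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def overlapping_client_round(received_models, X_local, y_local):
    """One round at a client covered by multiple ESs: average the downloaded
    models, update with local data, then broadcast the same updated model
    back to all covering ESs (hypothetical interface)."""
    w = average_models(received_models)
    w = local_update(w, X_local, y_local)
    return w  # broadcast to every ES in range

# Toy usage: a client in the overlap of two ESs.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
models_from_adjacent_ess = [rng.normal(size=4), rng.normal(size=4)]
updated_model = overlapping_client_round(models_from_adjacent_ess, X, y)
```

In this sketch, the averaging step is what lets an ES with a biased local dataset absorb information from its neighbors without any cloud-level synchronization, since the overlapping client carries the blended model back to all ESs that cover it.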
