Abstract

Federated learning (FL) has recently emerged as a popular distributed learning paradigm because it allows collaborative training of a global machine learning model while keeping the training data of participating workers local. This paradigm lets model training harness the computing power distributed across the FL network while preserving the privacy of local training data. However, communication efficiency has become one of the major concerns in FL due to frequent model updates over the network, especially for devices in wireless networks with limited communication resources. Although various communication-efficient compression mechanisms (e.g., quantization and sparsification) have been incorporated into FL, most existing studies focus on resource allocation optimization under predetermined compression mechanisms, and few take wireless communication into account in the design of the compression mechanisms themselves. In this paper, we study the impact of sparsification and wireless channels on FL performance. Specifically, we propose a channel-aware sparsification mechanism and derive a closed-form solution for communication time allocation among workers in a TDMA setting. Extensive simulations are conducted to validate the effectiveness of the proposed mechanism.
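The paper's actual sparsification rule and closed-form time-allocation solution are derived in the full text. As a rough illustration only, the sketch below shows one plausible form of channel-aware top-k sparsification, assuming each worker's sparsification budget scales with the airtime it receives in a shared TDMA frame and with its achievable channel rate; the function name, parameters, and allocation rule here are hypothetical, not the authors' method.

```python
import numpy as np

def channel_aware_topk(gradients, rates, frame_time, bits_per_entry=32):
    """Illustrative channel-aware top-k sparsification (assumed rule, not the paper's).

    gradients  : list of 1-D numpy arrays, one gradient vector per worker.
    rates      : achievable channel rates (bits/s) per worker, assumed known.
    frame_time : total TDMA frame duration (s) shared among all workers.
    """
    rates = np.asarray(rates, dtype=float)
    # Hypothetical airtime allocation: proportional to each worker's rate,
    # so workers on better channels get a larger share of the frame.
    times = frame_time * rates / rates.sum()

    sparsified = []
    for g, r, t in zip(gradients, rates, times):
        # Budget: number of (value, index) pairs that fit in the allotted airtime,
        # assuming each pair costs roughly 2 * bits_per_entry bits.
        k = int(r * t // (2 * bits_per_entry))
        k = max(1, min(k, g.size))
        # Keep the k largest-magnitude entries and zero out the rest.
        idx = np.argpartition(np.abs(g), -k)[-k:]
        sg = np.zeros_like(g)
        sg[idx] = g[idx]
        sparsified.append(sg)
    return sparsified, times

# Example: three workers with different channel rates sharing a 10 ms frame.
grads = [np.random.randn(1000) for _ in range(3)]
sparse_grads, airtime = channel_aware_topk(grads, rates=[1e6, 5e5, 2e6], frame_time=0.01)
```

Under this assumed rule, a worker with a weaker channel transmits a sparser update, trading accuracy of its local update for shorter transmission time within the frame.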
