Abstract

Federated learning (FL) is a machine learning framework in which multiple distributed edge Internet of Things (IoT) devices collaboratively train a model under the orchestration of a central server while keeping the training data on the devices. FL can mitigate the privacy risks and costs of data collection in traditional centralized machine learning. However, the deployment of standard FL is hindered by the cost of communicating gradients from the devices to the server, and many gradient compression methods have therefore been proposed to reduce this communication cost. Existing methods, however, ignore the structural correlations of the gradients and thus incur a large compression loss that decelerates training convergence. Moreover, many existing compression schemes do not support over-the-air aggregation and hence require substantial communication resources. In this work, we propose a gradient compression scheme, named FedOComp, which leverages the correlations of the stochastic gradients in FL systems to efficiently compress the high-dimensional gradients with over-the-air aggregation. Because the compression kernel exploits the structural correlations of the gradients, the proposed design decelerates training convergence less than other gradient compression methods, and it directly enables over-the-air aggregation to save communication resources. The derived convergence analysis and simulation results further show that, under the same power cost, the proposed scheme achieves a much faster convergence rate and higher test accuracy than existing baselines.
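To make the idea concrete, the following is a minimal sketch (not the paper's actual FedOComp algorithm) of linear gradient compression that exploits structural correlations: device gradients are assumed to lie near a shared low-dimensional subspace, a PCA-style projection serves as a stand-in for the learned compression kernel, and, because the map is linear, the compressed vectors can be summed "over the air" and decompressed once at the server. All dimensions and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: d-dimensional gradients, k-dimensional code,
# n_devices participating devices (all hypothetical, not from the paper).
d, k, n_devices = 64, 8, 10

# Correlated gradient model: each device's stochastic gradient lies near a
# shared k-dimensional subspace plus small independent noise.
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared structure (d, k)
grads = [basis @ rng.standard_normal(k) + 0.01 * rng.standard_normal(d)
         for _ in range(n_devices)]

# Compression kernel: top-k eigenvectors of the empirical gradient
# covariance (a PCA-style proxy for a correlation-aware kernel).
G = np.stack(grads)                       # (n_devices, d)
cov = G.T @ G / n_devices                 # empirical second moment (d, d)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
U = eigvecs[:, -k:]                       # (d, k) compression kernel

# Each device transmits only its k-dimensional projection; linearity lets
# the channel superpose (sum) the projections "over the air".
compressed = [U.T @ g for g in grads]     # k real symbols per device
aggregated = np.sum(compressed, axis=0)   # over-the-air sum at the server

# The server decompresses the aggregate in a single step.
recovered = U @ aggregated                # estimate of the summed gradient
exact = np.sum(grads, axis=0)
rel_err = np.linalg.norm(recovered - exact) / np.linalg.norm(exact)
print(f"compression ratio d/k = {d / k:.0f}x, relative error = {rel_err:.3f}")
```

Because the gradients are strongly correlated, an 8x compression recovers the aggregate with small relative error in this toy setup; a correlation-agnostic scheme (e.g., random sparsification) would discard shared structure and lose more.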
