Abstract

This paper proposes a communication-efficient federated learning (FL) framework that, for the first time, leverages ideas from vector quantized compressed sensing to compress the local model updates at wireless devices in FL. For the compression, each local model update is projected onto a lower-dimensional space; the projected update is then quantized using a vector quantizer. The global model update is reconstructed at a parameter server by applying a sparse signal recovery algorithm to the aggregation of the compressed local model updates. A key feature of this compression strategy is that, by the central limit theorem, the projected local model update is well modeled as a Gaussian random vector. Motivated by this observation, the optimal vector quantizer for minimizing the compression error of the local model update is derived. Simulation results on the MNIST dataset demonstrate that the proposed framework, using only 0.5 bits per local model update entry, incurs less than a 1% decrease in classification accuracy compared to FL without local update compression.
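
The pipeline described above can be made concrete with a minimal sketch. Everything in the following code is an illustrative assumption rather than the paper's actual method: the function names (train_codebook, compress, decompress, iht), all dimensions and bit rates, the k-means-trained codebook standing in for the analytically derived Gaussian-optimal quantizer, and iterative hard thresholding standing in for the unspecified sparse recovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_codebook(block_dim, num_codewords, num_train=20000, iters=25):
    """Fit a vector-quantizer codebook to synthetic N(0, I) blocks with
    Lloyd (k-means) iterations -- a numerical stand-in for the paper's
    analytically derived Gaussian-optimal quantizer."""
    train = rng.standard_normal((num_train, block_dim))
    codebook = train[rng.choice(num_train, num_codewords, replace=False)].copy()
    for _ in range(iters):
        dist = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = dist.argmin(1)
        for k in range(num_codewords):
            members = train[nearest == k]
            if len(members) > 0:
                codebook[k] = members.mean(0)
    return codebook

def compress(update, A, codebook):
    """Device side: random projection, then vector quantization. The
    projection makes the entries of y approximately Gaussian (CLT), so
    blocks of y/scale match the N(0, I)-trained codebook."""
    y = A @ update
    scale = np.linalg.norm(y) / np.sqrt(y.size)  # one scalar sent alongside
    blocks = (y / scale).reshape(-1, codebook.shape[1])
    dist = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dist.argmin(1), scale                  # codeword indices + scale

def decompress(indices, scale, codebook):
    """Map codeword indices back to an estimate of the projected update."""
    return codebook[indices].reshape(-1) * scale

def iht(y, A, sparsity, iters=100):
    """Server side: iterative hard thresholding, one simple choice of
    sparse signal recovery algorithm (the paper does not mandate it)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        support = np.argsort(np.abs(x))[-sparsity:]
        pruned = np.zeros_like(x)
        pruned[support] = x[support]
        x = pruned
    return x

# ---- toy end-to-end run; all sizes and rates are illustrative ----
n, m, s, devices = 1024, 256, 20, 8
block_dim, num_codewords = 4, 16        # log2(16)/4 = 1 bit per projected entry
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
codebook = train_codebook(block_dim, num_codewords)

# synthetic sparse local updates; a shared support keeps the aggregate s-sparse
support = rng.choice(n, s, replace=False)
updates = []
for _ in range(devices):
    u = np.zeros(n)
    u[support] = rng.standard_normal(s)
    updates.append(u)

# devices compress; the server averages the dequantized projections and recovers
y_hat = np.mean([decompress(*compress(u, A, codebook), codebook) for u in updates], axis=0)
global_update = iht(y_hat, A, sparsity=s)
truth = np.mean(updates, axis=0)
print("relative recovery error:",
      np.linalg.norm(global_update - truth) / np.linalg.norm(truth))
```

Note the aggregation-friendly structure implied by the abstract: because each device sends a (quantized) linear projection of its update, the server can average the dequantized projections first and run sparse recovery once on the aggregate, rather than reconstructing each device's update separately.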
