Abstract

Quantization has increasingly been proposed in wireless federated learning (FL) to compress the transmitted data, thereby significantly reducing training latency. In this letter, we further reduce training latency by introducing dynamic quantization and bandwidth adaptation. Our insights are twofold: (1) without introducing any information loss, the quantization bit length can be dynamically adjusted according to the magnitude of the weight updates (i.e., differentials), which varies across devices and training iterations, and (2) bandwidth allocation can further adapt to the varying sizes of the transmitted data to reduce the communication latency of straggler devices. We mathematically prove the convergence of dynamic quantization and formulate the bandwidth allocation optimization problem. Evaluation results demonstrate that, while preserving test accuracy, our techniques reduce training latency by over 50% on both i.i.d. and non-i.i.d. datasets compared to previous work.
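The first insight, adapting the quantization bit length to the magnitude of each device's weight updates, can be illustrated with a minimal sketch. The selection rule, function names, and the `precision` parameter below are illustrative assumptions, not the letter's exact scheme:

```python
import numpy as np

def dynamic_bit_length(update, precision=1e-3):
    # Hypothetical rule: use just enough bits to cover the update's
    # magnitude range at a target precision, so small late-training
    # differentials need fewer bits than large early-training ones.
    max_mag = float(np.max(np.abs(update)))
    levels = max(1, int(np.ceil(max_mag / precision)))
    return int(np.ceil(np.log2(levels + 1))) + 1  # +1 sign bit

def quantize(update, bits):
    # Uniform symmetric quantizer with 2^(bits-1) - 1 positive levels.
    max_mag = float(np.max(np.abs(update)))
    if max_mag == 0.0:
        return np.zeros_like(update), 1.0
    scale = (2 ** (bits - 1) - 1) / max_mag
    return np.round(update * scale), scale

rng = np.random.default_rng(0)
large = rng.normal(scale=1.0,  size=1000)   # early-training differentials
small = rng.normal(scale=1e-3, size=1000)   # late-training differentials
b_large = dynamic_bit_length(large)
b_small = dynamic_bit_length(small)
```

Under this rule, the payload per round shrinks as training converges and the differentials decay, which is also what makes per-round bandwidth re-allocation (the second insight) worthwhile: devices whose updates quantize to fewer bits free up bandwidth for stragglers.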
