Abstract

In federated learning (FL), model weights must be updated at the local users and the base station (BS). These weights are subject to uplink (UL) and downlink (DL) transmission errors due to the limited reliability of wireless channels. In this paper, we investigate the impact of imperfections in both the UL and DL links. First, for a multi-user massive multiple-input multiple-output (mMIMO) 6G network employing zero-forcing (ZF) and minimum mean-squared-error (MMSE) schemes, we analyze the weight estimation errors in each round. A tighter convergence bound of order $\mathcal{O}\left(T^{-1}\sigma_{z}^{2}\right)$ is derived on the modelling error of the communication-efficient FL algorithm, where $\sigma_{z}^{2}$ denotes the variance of the overall communication error, including the quantization noise. The analysis shows that the reliability of the DL links is more critical than that of the UL links, and that the transmit power can be varied during training to reduce energy consumption. We also vary the number of local training steps, the average codeword length after quantization, and the scheduling policy to improve communication efficiency. Simulations on image classification problems with the MNIST, EMNIST, and FMNIST datasets verify the derived bound and allow the minimum SNR required for successful convergence of the FL algorithm to be inferred.
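
To make the role of the UL and DL errors concrete, the following is a minimal Python sketch of one noisy FL round. It is not the paper's exact mMIMO system model: the uniform quantizer step and the noise levels sigma_ul and sigma_dl are illustrative placeholders standing in for the quantization and channel-induced errors that together form $\sigma_{z}^{2}$.

import numpy as np

# Minimal illustrative sketch (not the paper's exact system model): one FL round
# in which local updates are quantized, corrupted by UL noise at the BS, aggregated,
# and then corrupted by DL noise before being redistributed to the users.
# step, sigma_ul, and sigma_dl are assumed illustrative parameters.

rng = np.random.default_rng(0)

def quantize(w, step=0.01):
    # Uniform quantization; its error contributes to the overall variance sigma_z^2.
    return step * np.round(w / step)

def fl_round(global_w, local_grads, lr=0.1, sigma_ul=0.01, sigma_dl=0.01):
    # Each user takes a local step, quantizes the update, and sends it over a noisy UL.
    received = []
    for g in local_grads:
        update = quantize(-lr * g)
        received.append(update + sigma_ul * rng.standard_normal(update.shape))
    # The BS averages the received updates; per-user UL noise is attenuated by averaging.
    aggregated = global_w + np.mean(received, axis=0)
    # The aggregated model is broadcast over a noisy DL; this noise affects every user's
    # copy of the shared model, consistent with DL reliability being the more critical link.
    return aggregated + sigma_dl * rng.standard_normal(aggregated.shape)

# Toy usage: 4 users, a 10-dimensional model, and random surrogate gradients.
w = np.zeros(10)
grads = [rng.standard_normal(10) for _ in range(4)]
w = fl_round(w, grads)
print(w)

In this toy setting, averaging over users suppresses the UL noise, while the DL noise enters the shared model directly, which mirrors the abstract's observation that DL reliability matters more than UL reliability.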
