Abstract

Federated Learning (FL) is a distributed machine learning technique designed to utilize the distributed datasets collected by mobile and Internet-of-Things (IoT) devices. As such, it is natural to consider wireless communication for FL. In wireless networks, Over-the-Air Computation (AirComp) can accelerate FL training by harnessing the interference of uplink gradient transmissions. However, since AirComp relies on analog transmissions, it introduces an inevitable estimation error due to channel fading and noise. In this paper, we propose retransmissions as a method to reduce such estimation errors and thereby improve the FL classification accuracy. First, we derive the optimal power control scheme with retransmissions. Then, we investigate the performance of FL with retransmissions analytically and derive an upper bound on the FL loss function. The analysis indicates that our proposed retransmission scheme improves both the final classification accuracy after convergence and the convergence speed per communication round. Experimental results demonstrate that the introduction of retransmissions can yield higher classification accuracy than one-shot uplink transmissions, without incurring extra communication costs or latency.
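To illustrate the intuition behind retransmissions in AirComp (not the paper's actual power-control scheme), the following minimal Python sketch simulates an analog over-the-air aggregation in which all devices transmit their gradients simultaneously and the receiver averages M repeated transmissions. All quantities (number of devices K, gradient dimension d, noise level, and the unit-gain channel) are illustrative assumptions; the key point is that averaging M retransmissions reduces the noise variance of the aggregated estimate by roughly a factor of 1/M.

```python
# Toy sketch of AirComp-style aggregation with retransmissions.
# Assumptions (not from the paper): unit channel gains, additive Gaussian
# receiver noise, and simple averaging over M identical retransmissions.
import numpy as np

rng = np.random.default_rng(0)
K, d, noise_std = 10, 1000, 0.5           # devices, gradient dimension, noise level
grads = rng.normal(size=(K, d))           # local gradients held by the devices
true_avg = grads.mean(axis=0)             # ideal, noise-free federated average

def aircomp_estimate(m_retx: int) -> np.ndarray:
    """Average m_retx noisy over-the-air aggregations of the same gradients."""
    estimates = []
    for _ in range(m_retx):
        # Devices transmit simultaneously: signals superpose at the receiver
        # and are corrupted by additive noise.
        received = grads.sum(axis=0) + rng.normal(scale=noise_std, size=d)
        estimates.append(received / K)    # normalize to get an average estimate
    return np.mean(estimates, axis=0)     # retransmission averaging

for m in (1, 4, 16):
    err = np.linalg.norm(aircomp_estimate(m) - true_avg) / np.sqrt(d)
    print(f"M={m:2d} retransmissions -> RMSE ~= {err:.4f}")
```

Running this prints a per-coordinate RMSE that shrinks roughly as 1/sqrt(M), which is the effect the paper exploits; the actual scheme additionally optimizes transmit power under fading channels.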
