Abstract

Federated learning (FL) is an emerging machine learning technique that aggregates model parameters from a large number of distributed devices. Compared with traditional centralized machine learning, FL uploads only model parameters rather than raw data during the learning process. Although this distributed computation reduces the amount of information that needs to be uploaded, model updates in FL can still become a performance bottleneck, especially when training deep learning models over distributed networks. In this work, we investigate the performance of FL updates at mobile edge devices that are connected to the parameter server (PS) over wireless links. Considering the spectrum limitations of wireless fading channels, we further exploit non-orthogonal multiple access (NOMA) together with adaptive gradient quantization and sparsification to enable efficient uplink FL updates. Simulation results show that the proposed scheme significantly reduces FL aggregation latency while achieving accuracy comparable to benchmark schemes.
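To illustrate the kind of gradient compression the abstract refers to, the following is a minimal sketch of top-k sparsification followed by uniform quantization of a local gradient before uplink transmission. It is a generic illustration, not the paper's adaptive scheme; the function names, the fixed bit width, and the sparsity level are illustrative assumptions.

```python
import numpy as np

def sparsify_and_quantize(grad, k, num_bits):
    """Top-k sparsification followed by uniform quantization.

    Generic sketch of gradient compression for FL uplinks;
    not the paper's exact adaptive quantization scheme.
    """
    flat = grad.ravel()
    # Keep only the k largest-magnitude entries (top-k sparsification).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]

    # Uniform quantization of the surviving values to 2^num_bits levels.
    levels = 2 ** num_bits - 1
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / levels if vmax > vmin else 1.0
    q = np.round((values - vmin) / scale).astype(np.uint32)

    # The device uploads (idx, q, vmin, scale) instead of the full gradient.
    return idx, q, vmin, scale

def dequantize(idx, q, vmin, scale, shape):
    """Parameter-server-side reconstruction of the sparse gradient."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = q * scale + vmin
    return flat.reshape(shape)

# Example: compress a 10,000-element gradient to 1% density at 4 bits per value.
g = np.random.randn(100, 100)
idx, q, vmin, scale = sparsify_and_quantize(g, k=100, num_bits=4)
g_hat = dequantize(idx, q, vmin, scale, g.shape)
```

With this kind of compression, each device uploads only the indices and quantized values of the retained gradient entries, which is what keeps the per-round uplink payload small when many devices share the NOMA channel.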
