Abstract

Federated learning is an emerging communication and computing paradigm that allows naturally distributed data sets (e.g., data collected by acquisition sensors) to be used to train global models, thereby addressing privacy, power, and bandwidth limitations in wireless networks. In this paper, we study the communications problem of minimizing the latency of a multi-user wireless network used to train a decentralized machine learning model. To achieve low latency, the wireless stations (WSs) employ non-orthogonal multiple access (NOMA) for simultaneous transmission of local model parameters to the base station, subject to each user's maximum CPU frequency, maximum transmit power, and maximum available energy. The proposed resource allocation scheme guarantees fair resource sharing among WSs by ensuring that only a single WS spends its maximum allowable energy or transmits at maximum power, while the remaining WSs transmit at lower power and spend less energy. A closed-form analytical solution for the optimal resource allocation parameters enables efficient online implementation of the proposed scheme with low computational complexity.
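The per-round latency described in the abstract is set by the slowest station, since the base station must receive every local update before aggregating. The following is a minimal sketch of that bottleneck structure, using a simplified Shannon-rate upload model and hypothetical station parameters; the paper's actual NOMA rate analysis with successive interference cancellation, and its closed-form optimal allocation, are more involved.

```python
import math

def station_latency(samples, cycles_per_sample, cpu_hz, model_bits, bandwidth_hz, snr):
    """Per-round latency of one wireless station: local training time plus
    upload time. Uses a simple Shannon-capacity rate model as a stand-in for
    the paper's NOMA rate expressions (an illustrative assumption)."""
    t_comp = samples * cycles_per_sample / cpu_hz        # local computation (s)
    rate = bandwidth_hz * math.log2(1.0 + snr)           # achievable rate (bit/s)
    t_comm = model_bits / rate                           # model upload time (s)
    return t_comp + t_comm

# Hypothetical parameters for three wireless stations
stations = [
    dict(samples=5000, cycles_per_sample=2e4, cpu_hz=1e9,
         model_bits=1e6, bandwidth_hz=1e6, snr=10.0),
    dict(samples=8000, cycles_per_sample=2e4, cpu_hz=2e9,
         model_bits=1e6, bandwidth_hz=1e6, snr=4.0),
    dict(samples=3000, cycles_per_sample=2e4, cpu_hz=0.5e9,
         model_bits=1e6, bandwidth_hz=1e6, snr=15.0),
]

# The round finishes only when the slowest WS has uploaded its update,
# so round latency is the maximum of the per-station latencies.
round_latency = max(station_latency(**s) for s in stations)
```

Under this toy model, the station with the weakest channel dominates the round latency, which motivates a joint allocation of CPU frequency and transmit power across stations rather than optimizing each WS in isolation.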
