Abstract
Federated learning is an emerging communication-and-computing paradigm that trains global models directly over naturally distributed data sets (e.g., data collected by acquisition sensors), thereby addressing the privacy, power, and bandwidth limitations of wireless networks. In this paper, we study the problem of minimizing the communication latency of a multi-user wireless network used to train a decentralized machine learning model. To achieve low latency, the wireless stations (WSs) employ non-orthogonal multiple access (NOMA) to transmit their local model parameters to the base station simultaneously, subject to each WS's maximum CPU frequency, maximum transmit power, and maximum available energy. The proposed resource allocation scheme guarantees fair resource sharing among the WSs: only a single WS spends the maximum allowable energy or transmits at maximum power, while the remaining WSs transmit at lower power and spend less energy. A closed-form analytical solution for the optimal resource allocation parameters enables efficient online implementation of the proposed scheme with low computational complexity.
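To make the latency objective concrete, the following is a minimal toy sketch of the kind of round-latency model such a scheme optimizes. It is not the paper's method or closed-form solution: the per-user quantities (`cycles_per_bit`, `cpu_hz`, `tx_power_w`, `channel_gain`) and the standard computation-plus-NOMA-upload latency model with successive interference cancellation (SIC) are illustrative assumptions.

```python
import math

def noma_fl_round_latency(users, bandwidth_hz, noise_w, model_bits):
    """Toy round latency for NOMA-based federated learning (illustrative only).

    Each user dict holds assumed parameters: cycles_per_bit, data_bits,
    cpu_hz (CPU frequency f_k), tx_power_w (p_k), channel_gain (h_k).
    Under SIC, users are decoded in order of decreasing received power;
    each user's signal sees interference only from not-yet-decoded
    (weaker) users.
    """
    # Decode strongest received signal first (standard SIC ordering).
    order = sorted(users,
                   key=lambda u: u["tx_power_w"] * u["channel_gain"],
                   reverse=True)
    latencies = []
    for i, u in enumerate(order):
        # Local computation time: total CPU cycles / CPU frequency.
        t_comp = u["cycles_per_bit"] * u["data_bits"] / u["cpu_hz"]
        # Residual interference from users decoded after this one.
        interference = sum(v["tx_power_w"] * v["channel_gain"]
                          for v in order[i + 1:])
        sinr = u["tx_power_w"] * u["channel_gain"] / (interference + noise_w)
        rate = bandwidth_hz * math.log2(1.0 + sinr)   # bits per second
        t_upload = model_bits / rate                  # model-update upload time
        latencies.append(t_comp + t_upload)
    # A round finishes when the slowest WS finishes (min-max objective).
    return max(latencies)
```

Because NOMA lets all WSs transmit at once, the round latency is the maximum of the per-user latencies; the paper's optimization then chooses CPU frequencies and transmit powers to minimize this maximum subject to the power and energy caps.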