Abstract

Federated learning (FL) is a decentralized learning paradigm that trains a globally shared model without requiring user equipments (UEs) to send their raw data to a centralized server. When UEs have non-independently and identically distributed (non-IID) data, heterogeneous computational capabilities, and diverse wireless channel conditions, FL becomes inefficient over a wireless network. In this paper, we jointly optimize the user scheduling policy and resource allocation to achieve a tradeoff among the fairness of user scheduling, the accuracy of FL, the training time, and the energy consumption of UEs. The optimization problem is formulated as a Markov decision process (MDP) that captures the impact of the current scheduling decision on subsequent training rounds and available resources. To solve the problem, a policy network is trained with an actor-critic deep reinforcement learning framework. Simulation results show that, compared with a random user selection and resource allocation policy, the proposed policy reduces the time and energy cost of the training process while improving the freshness of local updates and the performance on the 20% worst-performing UEs.
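
To make the actor-critic training step concrete, the following is a minimal sketch of an advantage actor-critic update for a per-round UE scheduling policy, written in PyTorch. It is illustrative only: the state encoding (e.g., per-UE channel gains, energy budgets, and update staleness), the scalar reward (which would combine accuracy, training-time, energy, and fairness terms), the action space (scheduling a single UE per round), and all network sizes are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SchedulingActorCritic(nn.Module):
    """Actor-critic network: the actor outputs a scheduling distribution
    over UEs, the critic estimates the state value V(s)."""
    def __init__(self, state_dim, num_ues, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.actor = nn.Linear(hidden, num_ues)   # logits over UEs
        self.critic = nn.Linear(hidden, 1)        # state value V(s)

    def forward(self, state):
        h = self.shared(state)
        return F.softmax(self.actor(h), dim=-1), self.critic(h)

def a2c_update(model, optimizer, state, action, reward, next_state,
               gamma=0.99):
    """One advantage actor-critic update from a single transition.
    `reward` is assumed to already aggregate the accuracy, time,
    energy, and fairness terms into one scalar."""
    probs, value = model(state)
    with torch.no_grad():                     # bootstrap target
        _, next_value = model(next_state)
        target = reward + gamma * next_value
    advantage = target - value
    log_prob = torch.log(probs[action] + 1e-8)
    actor_loss = -(log_prob * advantage.detach())  # policy gradient
    critic_loss = advantage.pow(2)                 # value regression
    loss = actor_loss + 0.5 * critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage: a 30-dim state over 10 UEs, random placeholder data.
model = SchedulingActorCritic(state_dim=30, num_ues=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s = torch.randn(30)
probs, _ = model(s)
a = torch.multinomial(probs, 1).item()        # sample a UE to schedule
a2c_update(model, opt, s, a, reward=0.5, next_state=torch.randn(30))
```

In the paper's setting, the action would presumably be richer than a single UE index, e.g., a joint choice of the scheduled subset of UEs and their bandwidth allocation, but the same actor-critic update structure applies.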
