Abstract

A promising distributed learning framework, federated learning (FL), can preserve users' local data privacy. Nevertheless, training a machine learning (ML) model is a demanding task for energy-limited wireless devices (WDs). This paper studies wireless power transfer (WPT) aided FL, in which the cellular base station (BS) is responsible both for charging the WDs via WPT and for receiving the WDs' locally trained models for aggregation in each round of FL. Specifically, since the WDs are charged by the BS in sequence, we allow each WD to adopt its own number of local iterations, so that the WDs generate local models of different accuracy. We formulate a joint optimization of each WD's processing rate, the WPT duration the BS spends charging each WD, and each WD's number of local iterations, with the goal of minimizing the overall latency of the FL iterations until the convergence condition is reached. Despite the problem's non-convexity, we decompose it into two subproblems and propose a simulated annealing based algorithm that solves them in sequence efficiently. Simulation results demonstrate the effectiveness of the proposed algorithm and illustrate the advantages of the proposed scheme over several baseline schemes.
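The abstract does not specify the details of the simulated annealing procedure, but its general shape can be illustrated with a minimal sketch. The code below is a generic simulated annealing loop applied to a purely hypothetical latency objective over per-WD WPT durations; the objective `latency`, the neighborhood move `perturb`, and all parameter values are illustrative assumptions, not the paper's actual system model.

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Generic simulated annealing: accept uphill moves with a
    temperature-controlled probability to escape local minima."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Always accept improvements; accept worse moves with prob exp(-delta/t).
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Hypothetical stand-in for the per-round latency of 3 WDs as a function
# of their WPT durations (NOT the paper's objective).
def latency(tau):
    targets = (1.0, 2.0, 3.0)
    return sum((t - k) ** 2 for t, k in zip(tau, targets)) + 1.0

def perturb(tau, rng):
    # Nudge one randomly chosen WPT duration, keeping it nonnegative.
    i = rng.randrange(len(tau))
    out = list(tau)
    out[i] = max(0.0, out[i] + rng.uniform(-0.3, 0.3))
    return tuple(out)

best, fbest = simulated_annealing(latency, (0.5, 0.5, 0.5), perturb)
```

In the paper's setting, the annealing variables would instead be the integer-valued local-iteration counts (with the remaining variables solved per candidate), but the accept/reject and cooling logic would follow the same pattern.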
