Abstract

Performing federated learning continuously in edge networks, while training data are dynamically and unpredictably streamed to the devices, faces critical challenges: ensuring global model convergence, respecting the long-term resource budget, and coping with an uncertain, stochastic network and execution environment. We formulate an integer program that captures these challenges and minimizes the cumulative total latency of on-device stream learning and of federated learning between the devices and the edge server. We then decouple the problem, design an online learning algorithm that controls the number of local model updates via a convex-concave reformulation and rectified gradient-descent steps, and design a bandit learning algorithm that selects the edge server for global model aggregations, incorporating budget information to strike the exploration-exploitation balance. We rigorously prove sub-linear regret with respect to the optimization objective and sub-linear violation of the constraint on the maximal on-device load, while guaranteeing convergence of the trained global model. Extensive evaluations with real-world training data and input traces confirm the empirical superiority of our approach over multiple state-of-the-art algorithms.
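To make the budget-aware bandit server selection concrete, below is a minimal sketch of an upper-confidence-bound (UCB) style rule that picks an edge server for each global aggregation while scaling exploration by the remaining long-term budget. The class name BudgetAwareServerSelector, the particular confidence bonus, and the budget-scaling heuristic are illustrative assumptions for exposition only; they are not the paper's exact algorithm or constants.

```python
import math
import random

class BudgetAwareServerSelector:
    """Illustrative budget-aware UCB rule for choosing the edge server used
    for global model aggregation (assumed form, not the paper's algorithm)."""

    def __init__(self, num_servers, total_budget):
        self.num_servers = num_servers
        self.total_budget = total_budget        # long-term resource budget (assumed scalar)
        self.spent = 0.0                        # resources consumed so far
        self.counts = [0] * num_servers         # times each server has been selected
        self.avg_latency = [0.0] * num_servers  # empirical mean aggregation latency

    def select(self, round_idx):
        # Try every server once before relying on confidence bounds.
        for s in range(self.num_servers):
            if self.counts[s] == 0:
                return s
        # Scale exploration by the fraction of budget still available:
        # explore more while budget remains, exploit more as it runs out.
        budget_left = max(self.total_budget - self.spent, 0.0) / self.total_budget
        best, best_score = 0, float("inf")
        for s in range(self.num_servers):
            bonus = budget_left * math.sqrt(2.0 * math.log(round_idx + 1) / self.counts[s])
            score = self.avg_latency[s] - bonus  # optimistic (lower) latency wins
            if score < best_score:
                best, best_score = s, score
        return best

    def update(self, server, latency, cost):
        # Incorporate the observed aggregation latency and resource cost.
        self.counts[server] += 1
        n = self.counts[server]
        self.avg_latency[server] += (latency - self.avg_latency[server]) / n
        self.spent += cost


if __name__ == "__main__":
    # Toy usage with synthetic per-server latencies.
    selector = BudgetAwareServerSelector(num_servers=3, total_budget=100.0)
    true_latency = [0.8, 0.5, 1.2]
    for t in range(200):
        s = selector.select(t)
        observed = random.gauss(true_latency[s], 0.1)
        selector.update(s, latency=observed, cost=0.4)
    print("selection counts:", selector.counts)
```

The design choice illustrated here is that the confidence bonus shrinks as the budget is consumed, so later rounds lean on the latency estimates already gathered rather than spending scarce resources on exploration.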
