Abstract
To overcome the challenge of limited bandwidth, client selection has been considered an effective method for optimizing Federated Learning (FL). However, due to the volatility of the learning environment, the available clients vary over the course of training in terms of client population, client data, training status, and transmission status, which greatly increases the difficulty of client selection. To find a practical solution, we study the client selection problem in volatile federated learning (Volatile FL). Specifically, we first derive convergence analyses for the non-convex and strongly convex cases to identify the main factors affecting the convergence speed. Then, we introduce the client utility to quantify each client's contribution to model training and discuss the key challenges of client selection in Volatile FL. For an efficient solution, we propose CU-CS, a Combinatorial Multi-Armed Bandit (C²MAB) based decision scheme for the proposed selection problem. Theoretically, we prove that the regret of CU-CS is strictly bounded by a finite constant, establishing its theoretical feasibility. Experimental results demonstrate that our method significantly boosts FL by speeding up model convergence, improving model accuracy, and reducing energy consumption.
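To give a concrete sense of what a C²MAB-style selection loop looks like, the following is a minimal, generic sketch and not the paper's CU-CS algorithm: the class name `C2MABClientSelector`, the UCB-style exploration bonus, the utility model, and the availability simulation are all illustrative assumptions rather than the authors' definitions.

```python
import math
import random


class C2MABClientSelector:
    """Illustrative combinatorial-UCB client selector.

    NOTE: a generic sketch of a C²MAB-style selection loop, not the
    paper's CU-CS scheme; the exploration bonus and utility model are
    placeholder assumptions.
    """

    def __init__(self, budget):
        self.budget = budget        # number of clients selected per round
        self.counts = {}            # times each client has been selected
        self.mean_utility = {}      # empirical mean utility per client
        self.round = 0

    def select(self, available_clients):
        """Pick up to `budget` currently available clients by UCB index."""
        self.round += 1
        scores = {}
        for c in available_clients:
            if self.counts.get(c, 0) == 0:
                scores[c] = float("inf")  # force initial exploration
            else:
                bonus = math.sqrt(3 * math.log(self.round) / (2 * self.counts[c]))
                scores[c] = self.mean_utility[c] + bonus
        ranked = sorted(available_clients, key=lambda c: scores[c], reverse=True)
        return ranked[: self.budget]

    def update(self, observed_utilities):
        """Update per-client statistics with utilities observed this round."""
        for c, u in observed_utilities.items():
            n = self.counts.get(c, 0) + 1
            self.counts[c] = n
            prev = self.mean_utility.get(c, 0.0)
            self.mean_utility[c] = prev + (u - prev) / n


if __name__ == "__main__":
    # Toy run: 20 volatile clients, a random subset of which is available each round.
    true_utility = {i: random.uniform(0.1, 1.0) for i in range(20)}
    selector = C2MABClientSelector(budget=5)
    for _ in range(100):
        available = random.sample(range(20), k=random.randint(8, 20))
        chosen = selector.select(available)
        observed = {c: min(1.0, max(0.0, random.gauss(true_utility[c], 0.1)))
                    for c in chosen}
        selector.update(observed)
```

In this hypothetical setup, volatility is reflected by the changing set of available clients each round, and the bandit statistics let the server balance exploring rarely seen clients against exploiting those with high estimated utility.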