Abstract

While federated learning has shown strong results in optimizing a machine learning model without direct access to the original data, its performance may be hindered by intermittent client availability, which slows down convergence and biases the final learned model. Achieving both stable and bias-free training under arbitrary client availability poses significant challenges. To address these challenges, we propose a framework named Federated Graph-based Sampling (FEDGS) to simultaneously stabilize the global model update and mitigate long-term bias under arbitrary client availability. First, we model the data correlations of clients with a Data-Distribution-Dependency Graph (3DG) that helps keep the sampled clients' data apart from each other, which is theoretically shown to improve the approximation to the optimal model update. Second, constrained by the far distance in data distribution of the sampled clients, we further minimize the variance of the number of times that each client is sampled, to mitigate long-term bias. To validate the effectiveness of FEDGS, we conduct experiments on three datasets under a comprehensive set of seven client availability modes. Our experimental results confirm FEDGS's advantage in both enabling a fair client-sampling scheme and improving model performance under arbitrary client availability. Our code is available at https://github.com/WwZzz/FedGS.
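To make the two ideas in the abstract concrete, the sketch below shows one way a 3DG-constrained, fairness-aware client sampler could look. It is only an illustrative approximation under simple assumptions (per-client label-frequency vectors as the data-distribution summary, a fixed similarity threshold, greedy selection); it is not the authors' implementation, and the names `build_3dg` and `sample_clients` are hypothetical. The actual code is available in the linked repository.

```python
import numpy as np

def build_3dg(label_dists, threshold=0.5):
    """Build a Data-Distribution-Dependency Graph (3DG): connect clients
    whose label distributions are close (Euclidean distance below an
    illustrative 0.5 threshold). `label_dists` has shape
    (n_clients, n_classes) with per-client label frequencies."""
    diff = label_dists[:, None, :] - label_dists[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist < threshold) & ~np.eye(len(label_dists), dtype=bool)
    return adj

def sample_clients(adj, available, sample_counts, k):
    """Greedily pick up to k available clients. The 3DG keeps the sampled
    clients' data apart: once a client is chosen, its 3DG neighbours
    (clients with similar distributions) are removed from the candidate
    pool. Among the remaining candidates, clients sampled least often so
    far are preferred, which keeps the variance of the per-client sampling
    counts small (the long-term fairness goal)."""
    selected, candidates = [], set(available)
    while candidates and len(selected) < k:
        best = min(candidates, key=lambda c: sample_counts[c])
        selected.append(best)
        candidates.discard(best)
        candidates -= {c for c in candidates if adj[best, c]}
    for c in selected:
        sample_counts[c] += 1
    return selected

# Toy round: 6 clients, 3 classes, clients 2 and 5 unavailable this round.
rng = np.random.default_rng(0)
label_dists = rng.dirichlet(np.ones(3), size=6)
adj = build_3dg(label_dists)
counts = np.zeros(6, dtype=int)
print(sample_clients(adj, available=[0, 1, 3, 4], sample_counts=counts, k=3))
```

In this toy version the diversity constraint is enforced by pruning 3DG neighbours of already-selected clients, while fairness is enforced by always preferring the least-sampled candidate; the paper instead formulates this as minimizing the variance of sampling counts subject to the graph-based constraint.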
