Abstract
Federated learning (FL) has been widely applied to collaboratively train deep learning (DL) models on massive numbers of end devices (i.e., clients). Due to limited storage capacity and high labeling cost, the data on each client may be insufficient for model training. Conversely, cloud datacenters hold large-scale unlabeled data, which are easy to collect from public sources (e.g., social media). Herein, we propose the Ada-FedSemi system, which leverages both on-device labeled data and in-cloud unlabeled data to boost the performance of DL models. In each round, local models are aggregated to produce pseudo-labels for the unlabeled data, which are utilized to enhance the global model. Considering that the number of participating clients and the quality of pseudo-labels have a significant impact on training performance, we introduce a multi-armed bandit (MAB) based online algorithm to adaptively determine the participating fraction and the confidence threshold. In addition, to alleviate the impact of stragglers, we assign local models of different depths to heterogeneous clients. Extensive experiments on benchmark models and datasets show that, given the same resource budget, the model trained by Ada-FedSemi achieves 3% to 14.8% higher test accuracy than the baseline methods. When achieving the same test accuracy, Ada-FedSemi saves up to 48% of the training cost compared with the baselines. Under the scenario with heterogeneous clients, the proposed HeteroAda-FedSemi can further speed up the training process by 1.3× to 1.5×.
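The two core mechanisms the abstract describes, confidence-thresholded pseudo-labeling from an ensemble of client models and MAB-based selection of the (fraction, threshold) configuration, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and class names (`pseudo_label`, `UCB1`) are hypothetical, the ensemble is represented by per-client logits, and UCB1 stands in for whatever bandit variant the paper actually uses.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(client_logits, threshold):
    """Average the clients' predicted distributions on the unlabeled data and
    keep only samples whose top-class confidence exceeds the threshold.

    client_logits: list of (n_samples, n_classes) arrays, one per client.
    Returns (labels_of_kept_samples, boolean_keep_mask).
    """
    probs = np.mean([softmax(l) for l in client_logits], axis=0)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold
    return labels[mask], mask

class UCB1:
    """UCB1 bandit over a discrete grid of (participating fraction,
    confidence threshold) arms; the reward would be a per-round measure
    of training progress (e.g., validation accuracy gain per unit cost)."""
    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.t = 0

    def select(self):
        self.t += 1
        if (self.counts == 0).any():           # play each arm once first
            return int(np.argmin(self.counts))
        ucb = self.values + np.sqrt(2 * np.log(self.t) / self.counts)
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In each round the server would call `select()` to pick a (fraction, threshold) arm, run FL aggregation and pseudo-labeling with that configuration, then feed the observed training-progress reward back via `update()`.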