Abstract
Federated semi-supervised learning (FSSL) trains a model in a federated environment using a few labeled samples and many unlabeled samples. Compared with centralized semi-supervised learning, FSSL faces more complex data conditions, especially when the data are non-independently and identically distributed (non-IID), which further complicates the learning process. Previous work addresses these issues by enlarging the training sample space through random sampling across multiple clients and reweighting the model parameters; although this achieves high accuracy, it sacrifices communication efficiency. In this study, we propose PDCFed, a two-stage sampling method based on the Predicted Distribution Changes of samples under different data augmentations. We evaluate the credibility of each sample from the maximum class probability predicted under weak augmentation; samples that fall in the less reliable region are further sampled after their predicted distribution changes are reweighted with a Gaussian function. To enhance the model's generalization ability, an entropy penalty term is added to the unsupervised training loss. Extensive experiments demonstrate that this method outperforms existing methods on three datasets with non-IID data while significantly improving communication efficiency.
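The two-stage sampling idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of Euclidean distance to measure the predicted distribution change between weak and strong augmentations, and the threshold `tau` and Gaussian bandwidth `sigma` are all illustrative assumptions.

```python
import numpy as np


def softmax(z):
    """Convert logits to a probability distribution per sample."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def two_stage_sample(p_weak, p_strong, tau=0.95, sigma=0.5, seed=0):
    """Sketch of the two-stage selection described in the abstract.

    Stage 1: a sample is treated as reliable when the maximum
    probability predicted under weak augmentation exceeds `tau`.
    Stage 2: less reliable samples are kept stochastically, with a
    Gaussian weight on the change between the weak- and
    strong-augmentation predicted distributions (assumed here to be
    the Euclidean distance; the paper may use another measure).
    """
    conf = p_weak.max(axis=1)               # max predicted probability
    reliable = conf >= tau                  # stage 1: confidence test
    change = np.linalg.norm(p_weak - p_strong, axis=1)
    weight = np.exp(-change**2 / (2 * sigma**2))  # Gaussian weighting
    rng = np.random.default_rng(seed)
    sampled = (~reliable) & (rng.random(len(weight)) < weight)
    return reliable | sampled, weight


def entropy_penalty(p, eps=1e-12):
    """Mean prediction entropy, added to the unsupervised loss."""
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))
```

A confident prediction (e.g. softmax of logits `[5, 0]`) passes stage 1 directly, while an uncertain one is kept only with a probability that decays as its prediction shifts more between the two augmented views.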