Abstract

Federated Learning is a distributed machine learning method that offers inherent advantages in efficient learning and privacy protection in edge computing scenarios. However, terminal nodes often face challenges such as insufficient data and large amounts of unlabelled data, which reduce the accuracy of models trained through multi-party collaboration. Prior approaches typically rely on a single pseudo label from unlabelled data to guide model training, limiting how much of the knowledge in these data is utilized. To address this, this paper proposes a federated semi-supervised learning method (FedTG) tailored for image classification. Specifically, we let multiple high-probability pseudo labels from unlabelled data participate in semi-supervised learning, rather than relying on a single pseudo label. This mitigates the harm caused by errors in a single pseudo label and enables the model to capture the knowledge in the unlabelled data more fully. Additionally, recognizing the importance of the model classifier (the final neural network layer) in image classification tasks, we exclude classifier updates when training on unlabelled data in order to preserve classification performance. Experiments on real datasets demonstrate that FedTG effectively improves the accuracy of traditional Federated Learning models.
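The following is a minimal, hypothetical sketch (not taken from the paper) of the two ideas described above, assuming a PyTorch-style model whose final layer is exposed as `model.classifier`: each unlabelled sample contributes every class whose predicted probability exceeds a confidence threshold as a soft pseudo-label target, and the classifier's parameters are frozen during the unlabelled-data update. The names `model`, `classifier`, `unlabelled_batch`, and the threshold value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def unlabelled_step(model, unlabelled_batch, optimizer, threshold=0.4):
    """One training step on unlabelled data using multiple high-probability
    pseudo labels (a soft target) instead of a single hard pseudo label,
    with the final classifier layer frozen (illustrative sketch)."""
    # Freeze the classifier so unlabelled data cannot update it.
    for p in model.classifier.parameters():
        p.requires_grad = False

    with torch.no_grad():
        probs = F.softmax(model(unlabelled_batch), dim=1)

    # Keep every class whose predicted probability exceeds the threshold,
    # then renormalize to obtain a soft pseudo-label distribution.
    mask = probs >= threshold
    keep = mask.any(dim=1)  # samples with at least one confident class
    if keep.any():
        soft_targets = probs[keep] * mask[keep]
        soft_targets = soft_targets / soft_targets.sum(dim=1, keepdim=True)

        logits = model(unlabelled_batch[keep])
        loss = F.cross_entropy(logits, soft_targets)  # soft-label cross-entropy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Unfreeze the classifier for subsequent supervised (labelled) updates.
    for p in model.classifier.parameters():
        p.requires_grad = True
```

Freezing only the final layer while the feature extractor learns from soft pseudo labels is one plausible way to realize "excluding classifier updates"; the paper's actual procedure may differ in how pseudo labels are selected and weighted.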
