Abstract

Federated semi-supervised learning (FSSL) aims to train models with both labeled and unlabeled data in federated settings, improving performance and easing deployment in realistic scenarios. However, the non-independent and identically distributed (non-IID) data across clients leads to imbalanced model training, because different classes are learned unevenly. As a result, the federated model exhibits inconsistent performance not only across classes but also across clients. This article presents a balanced FSSL method with a fairness-aware pseudo-labeling (FAPL) strategy to tackle this fairness issue. Specifically, the strategy globally balances the number of unlabeled samples allowed to participate in model training. The global numerical restrictions are then decomposed into personalized local restrictions for each client to guide local pseudo-labeling. Consequently, the method yields a fairer federated model across all clients and achieves better performance. Experiments on image classification datasets demonstrate the superiority of the proposed method over state-of-the-art FSSL methods.
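The abstract only sketches the strategy, so the following is a minimal Python illustration of how such a global-to-local quota decomposition could look. All function names, the balancing rule (capping every class at the rarest class's global pseudo-label count), and the proportional split across clients are assumptions for illustration, not the paper's actual FAPL algorithm.

```python
# Minimal sketch of a fairness-aware pseudo-labeling quota scheme.
# The balancing rule and all names here are illustrative assumptions,
# not the published FAPL method.

from collections import defaultdict

def global_class_budget(client_counts):
    """Server side: compute a balanced per-class pseudo-label budget.

    client_counts: {client_id: {class_id: n_confident_pseudo_labels}}
    Assumed rule: cap every class at the global count of the rarest
    class, so no class dominates training.
    """
    totals = defaultdict(int)
    for counts in client_counts.values():
        for cls, n in counts.items():
            totals[cls] += n
    return min(totals.values())

def local_quotas(client_counts, budget):
    """Server side: decompose the global per-class budget into
    personalized per-client quotas, proportional to how many confident
    pseudo-labels each client holds for that class (assumed split)."""
    totals = defaultdict(int)
    for counts in client_counts.values():
        for cls, n in counts.items():
            totals[cls] += n
    return {
        cid: {cls: min(n, round(budget * n / totals[cls]))
              for cls, n in counts.items()}
        for cid, counts in client_counts.items()
    }

def select_pseudo_labels(scored_samples, quota):
    """Client side: keep the most confident pseudo-labels per class,
    up to the personalized quota received from the server.

    scored_samples: list of (sample_id, class_id, confidence)
    """
    by_class = defaultdict(list)
    for sid, cls, conf in scored_samples:
        by_class[cls].append((conf, sid))
    kept = []
    for cls, items in by_class.items():
        items.sort(reverse=True)  # most confident first
        kept += [sid for _, sid in items[: quota.get(cls, 0)]]
    return kept

# Toy round: two clients with skewed (non-IID) pseudo-label counts.
counts = {"c1": {0: 40, 1: 5}, "c2": {0: 10, 1: 15}}
budget = global_class_budget(counts)   # -> 20 (class 1 is rarest)
quotas = local_quotas(counts, budget)  # per-client, per-class caps
print(budget, quotas)
```

In this toy round the server trims class 0 from 50 candidate pseudo-labels down to 20, matching class 1, so both classes contribute equally to the next training round; each client receives only its own share of that cap.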
