Abstract

In personalized federated learning, some existing methods replace shared model parameters with shared samples generated by Generative Adversarial Networks (GANs), allowing each client to design its network architecture independently. However, these methods still require all participating clients to use a uniform label space. To remove this restriction, we propose the Federated Pseudo-Sample Clustering Algorithm (LPFL-GD), which lets clients train cooperatively while labeling their data in personalized ways. Each client uses its local model as the discriminator of a GAN and trains a generator against it to produce a pseudo-sample set, which is uploaded to the central server. The server clusters the uploaded samples with the DBSCAN algorithm; for each cluster, it collects the labels that the contributing clients assigned to its samples and corrects the label of the entire cluster accordingly. The corrected samples are then merged into each client's local dataset to extend it. Our approach improves model performance even when different clients assign different labels to the same type of data: compared with their accuracy before participating in federated learning, client models improve by up to 13.4%. When we replicated other methods in the same environment, their local model accuracy improved only slightly, and in some cases dropped by as much as 34.5%.
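To make the server-side step concrete, the sketch below illustrates clustering uploaded pseudo-samples with DBSCAN and correcting each cluster's label by majority vote. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name `correct_cluster_labels`, the use of scikit-learn's `DBSCAN`, and the `eps`/`min_samples` values are all placeholders for whatever the paper actually uses.

```python
# Illustrative sketch (not the paper's code) of the server-side step:
# DBSCAN groups the uploaded pseudo-samples, and each cluster's label is
# corrected by majority vote over the labels its contributing clients gave.
import numpy as np
from sklearn.cluster import DBSCAN

def correct_cluster_labels(samples, labels, eps=0.5, min_samples=5):
    """Cluster pseudo-samples and relabel each cluster by majority vote.

    samples: (n, d) array of flattened pseudo-sample features
    labels:  (n,) array of labels the clients attached to their samples
    """
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit(samples).labels_
    corrected = labels.copy()
    for cid in np.unique(cluster_ids):
        if cid == -1:  # DBSCAN marks noise points with -1; leave them as-is
            continue
        mask = cluster_ids == cid
        values, counts = np.unique(labels[mask], return_counts=True)
        corrected[mask] = values[np.argmax(counts)]  # majority label wins
    return corrected

# Toy usage: 200 random 16-dimensional pseudo-samples with noisy labels.
X = np.random.rand(200, 16)
y = np.random.randint(0, 3, size=200)
y_corrected = correct_cluster_labels(X, y)
```

In the full method, the corrected samples would then be merged into each client's local dataset before continuing local training.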
