Abstract

This paper studies how to incorporate contrastive learning into privacy-preserving federated learning (FL), with the goal of leveraging more data samples for model training. Existing methods usually encourage the global model and the local models in FL to be identical, often ignoring the data heterogeneity across clients. In this paper, we propose a personalized federated contrastive learning method that improves FL model performance on each client's task by learning a global representation and a local representation simultaneously. Our method is a novel FL framework that borrows the scheme of contrastive learning (CL): one CL branch is the global model, while the other branch is the local model, which is divided into a shared part and a personalized part. The model is trained by maximizing the agreement between the global model and the shared part of the local model, while minimizing the agreement between the global model and the personalized part. We evaluated the method on three public datasets for federated image classification. The results show that the proposed method benefits from the personalization of local models and thus achieves better accuracy than state-of-the-art FL models.
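The agreement objective described above can be sketched as a contrastive loss over representations. The following is a minimal, hypothetical illustration (not the paper's exact loss): it pulls the global representation toward the output of the shared part of the local model and pushes it away from the output of the personalized part; all function and variable names are assumptions for illustration.

```python
import numpy as np

def personalized_fcl_loss(z_global, z_shared, z_personal, temperature=0.5):
    """Illustrative contrastive objective (a sketch, not the paper's loss).

    z_global:   (N, D) representations from the global model branch
    z_shared:   (N, D) representations from the shared part of the local model
    z_personal: (N, D) representations from the personalized part
    """
    def normalize(z):
        # Project each representation onto the unit sphere.
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    zg, zs, zp = normalize(z_global), normalize(z_shared), normalize(z_personal)

    # Cosine-similarity logits, scaled by a temperature.
    pos = np.exp((zg * zs).sum(axis=1) / temperature)  # agreement to maximize
    neg = np.exp((zg * zp).sum(axis=1) / temperature)  # agreement to minimize

    # InfoNCE-style loss: high when the global branch agrees with the
    # personalized part more than with the shared part.
    return float(-np.log(pos / (pos + neg)).mean())
```

Minimizing this loss drives the global branch and the shared local part toward agreement while keeping the personalized part distinct, matching the training objective stated in the abstract.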
