Abstract

Federated Learning (FL) learns a global model across decentralized data held by different clients. However, it is susceptible to the statistical heterogeneity of client-specific data: each client optimizes for its own target distribution, which can cause the global model to diverge under inconsistent data distributions. Moreover, conventional federated learning approaches jointly learn representations and classifiers, further exacerbating this inconsistency and resulting in imbalanced features and biased classifiers. Hence, in this paper we propose an independent two-stage personalized FL framework, Fed-RepPer, which separates representation learning from classification in federated learning. In the first stage, client-side feature representation models are trained with a supervised contrastive loss, which keeps local objectives consistent, i.e., robust representations are learned on distinct data distributions; the local representation models are then aggregated into a common global representation model. In the second stage, personalization is achieved by learning a separate classifier for each client on top of the global representation model. The proposed two-stage learning scheme is examined in lightweight edge computing settings involving devices with constrained computational resources. Experiments on various datasets (CIFAR-10/100, CINIC-10) and heterogeneous data setups show that Fed-RepPer outperforms alternative methods in flexibility and personalization on non-IID data.
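The two-stage scheme described in the abstract can be sketched in miniature: in stage 1 each client refines a shared encoder (here a plain linear map, with a centroid-pulling objective standing in as a toy proxy for the supervised contrastive loss), and the server averages the client encoders FedAvg-style; in stage 2 each client fits a private classifier head on the frozen global encoder. All function names, the linear encoder, and the synthetic shifted data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_representation_update(global_w, X, y, lr=0.1, steps=20):
    """Stage 1 (client side): refine the shared linear encoder by pulling
    same-class embeddings toward their class centroid -- a toy stand-in
    for the supervised contrastive objective, not the paper's loss."""
    w = global_w.copy()
    for _ in range(steps):
        Z = X @ w                                  # embeddings
        grad = np.zeros_like(w)
        for c in np.unique(y):
            m = y == c
            grad += X[m].T @ (Z[m] - Z[m].mean(axis=0)) / len(X)
        w -= lr * grad
    return w

def fedavg(client_weights):
    """Server side: aggregate client encoders into the global encoder."""
    return np.mean(client_weights, axis=0)

def fit_personal_classifier(global_w, X, y, lr=1.0, steps=1000):
    """Stage 2 (client side): a logistic head trained on the frozen
    global representations; it stays local and is never aggregated."""
    Z = X @ global_w
    head, b = np.zeros(Z.shape[1]), 0.0
    for _ in range(steps):
        logits = np.clip(Z @ head + b, -30, 30)    # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-logits))
        head -= lr * Z.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return head, b

# Two clients with heterogeneous (covariate-shifted) data distributions.
d, n = 4, 200
w0 = 0.3 * rng.normal(size=(d, d))                 # initial global encoder
clients = []
for shift in (-1.0, 1.0):
    X = rng.normal(size=(n, d))
    X[:, 1] += shift                               # client-specific shift
    y = (X[:, 0] > 0).astype(float)                # shared labeling rule
    clients.append((X, y))

# Stage 1: one communication round of representation learning.
global_w = fedavg([local_representation_update(w0, X, y) for X, y in clients])

# Stage 2: each client keeps its own personalized classifier.
accs = []
for X, y in clients:
    head, b = fit_personal_classifier(global_w, X, y)
    accs.append(float(((X @ global_w @ head + b > 0) == (y == 1)).mean()))
```

In this sketch only the encoder weights cross the network boundary, which matches the abstract's claim of suitability for resource-constrained edge devices: the personalization step is a small local fit on frozen features.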
