Abstract

Federated learning enables multiple clients to collaboratively learn machine learning models in a privacy-preserving manner. However, a key challenge in real-world federated learning is the statistical heterogeneity among clients. Existing work has mainly focused on a single global model shared across clients, which struggles to generalize to all clients because of the large discrepancies in their data distributions. To address this challenge, we propose pFedLT, a novel approach that adapts the single global model to different data distributions. Specifically, we perform a pluggable layer-wise transformation during the local update phase based on scaling and shifting operations, which are learned with a meta-learning strategy. In this way, pFedLT captures the diversity of data distributions among clients and therefore generalizes well even when the data distributions exhibit high statistical heterogeneity. We conduct extensive experiments on synthetic and real-world datasets (MNIST, Fashion-MNIST, CIFAR-10, and Office+Caltech10) under different Non-IID settings. Experimental results demonstrate that pFedLT significantly improves model accuracy, by up to 11.67%, and reduces communication costs compared with state-of-the-art approaches.
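To make the layer-wise scaling-and-shifting idea concrete, below is a minimal PyTorch-style sketch. The module name ScaleShift, the identity initialization, and the placement between layers are illustrative assumptions rather than the paper's actual implementation, and the meta-learning procedure that pFedLT uses to train these parameters is omitted here.

    import torch
    import torch.nn as nn

    class ScaleShift(nn.Module):
        """Hypothetical layer-wise scale-and-shift transformation.

        Applies learnable per-feature parameters gamma (scale) and
        beta (shift) to a layer's output, letting each client adapt
        the shared global model to its local data distribution.
        """

        def __init__(self, num_features: int):
            super().__init__()
            # Initialize to the identity transformation (gamma=1, beta=0),
            # so an untrained module leaves the global model unchanged.
            self.gamma = nn.Parameter(torch.ones(num_features))
            self.beta = nn.Parameter(torch.zeros(num_features))

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # Element-wise scaling and shifting along the feature dimension.
            return self.gamma * h + self.beta

    # Example: plugging the transformation after each layer of a small MLP.
    # In this sketch, only the ScaleShift parameters would be adapted per
    # client, while the Linear weights remain shared across clients.
    model = nn.Sequential(
        nn.Linear(784, 256), ScaleShift(256), nn.ReLU(),
        nn.Linear(256, 10), ScaleShift(10),
    )

Because the transformation is pluggable, it can be inserted into an existing backbone without modifying the shared global weights, which is consistent with the abstract's claim of reduced communication costs.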
