Abstract

Data heterogeneity is a key challenge in federated learning. Traditional federated learning aims to train a single global model, but when clients' local data follow different distributions, one global model cannot serve every client well. To alleviate this problem, we propose a mutually beneficial collaboration method for personalized federated learning (FedMBC), which provides each client with a personalized model by enhancing collaboration among similar clients. First, we measure client similarity using the task-layer outputs and soft outputs of each client's model. Then, in each communication round, the server applies a similarity-based dynamic aggregation for every client, producing a model suited to that client's local data distribution; this aggregated model serves as the client's personalized model. Furthermore, because data heterogeneity and the varying set of clients selected in each round can slow the convergence of the aggregated model, each client incorporates the previous round's aggregated model into its local update to accelerate convergence. Finally, we compare our method with existing federated learning algorithms on multiple datasets under a variety of settings, and the results show that our method outperforms them in both test performance and communication efficiency. In particular, when client data distributions are highly diverse, FedMBC improves test accuracy by approximately 2.3% and reduces the number of communication rounds required by up to 35% compared with FedAvg on the CIFAR-10 dataset.
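The abstract describes the aggregation step only at a high level. The sketch below illustrates one plausible reading, assuming clients are compared by the cosine similarity of their models' averaged soft outputs on a small server-held probe batch, and that each client's personalized model is a similarity-weighted parameter average. The names `client_signature` and `personalized_aggregate`, the probe batch, and the softmax weighting are hypothetical choices introduced here for illustration, not FedMBC's actual formulation.

```python
import torch
import torch.nn.functional as F

def client_signature(model, probe_x):
    """Average soft output of one client's model on a shared probe batch.

    Stands in for the abstract's 'task layer outputs and soft outputs';
    the paper's exact similarity features are not specified here.
    """
    model.eval()
    with torch.no_grad():
        return F.softmax(model(probe_x), dim=1).mean(dim=0)

def personalized_aggregate(models, probe_x):
    """Server-side step: one similarity-weighted model average per client.

    Assumes all state_dict entries are floating-point tensors
    (e.g., no integer BatchNorm buffers).
    """
    sigs = torch.stack([client_signature(m, probe_x) for m in models])  # (N, C)
    # Pairwise cosine similarity between client signatures -> (N, N).
    sim = F.cosine_similarity(sigs.unsqueeze(1), sigs.unsqueeze(0), dim=2)
    weights = F.softmax(sim, dim=1)  # row i: aggregation weights for client i
    states = [m.state_dict() for m in models]
    personalized = []
    for i in range(len(models)):
        agg = {k: sum(weights[i, j] * states[j][k] for j in range(len(models)))
               for k in states[0]}
        personalized.append(agg)  # client i's personalized model for this round
    return personalized
```

In this reading, each client would load its entry from `personalized_aggregate` at the start of the next round and continue local training from it, matching the abstract's claim that reusing the previous round's aggregated model accelerates convergence.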
