Federated Learning (FL) has recently attracted considerable attention in the Internet of Things, owing to its capability of enabling mobile clients to collaboratively learn a global prediction model without sharing their privacy-sensitive data with the server. Despite its great potential, a main challenge of FL is that the training data are usually non-Independent and Identically Distributed (non-IID) across clients, which may introduce bias into model training and cause accuracy degradation. To address this issue, this paper proposes a novel FL algorithm to alleviate the accuracy degradation caused by non-IID data at the clients. First, we observe that clients with different degrees of non-IID data exhibit heterogeneous weight divergence from clients with IID data. Inspired by this observation, we use weight divergence to identify the non-IID degree of each client. We then propose an efficient FL algorithm, named CSFedAvg, in which clients with a lower degree of non-IID data are chosen to train the model with higher frequency. Finally, we conduct simulations using publicly available datasets to train deep neural networks. The simulation results show that the proposed algorithm improves training performance compared with existing FL protocols.
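The abstract describes ranking clients by weight divergence and then selecting lower-divergence (lower non-IID degree) clients more often. The following is a minimal, hedged Python sketch of that general idea, not the authors' CSFedAvg implementation; the reference model, the relative-L2 divergence measure, and names such as `weight_divergence`, `selection_probabilities`, and `selection_temperature` are illustrative assumptions.

```python
# Illustrative sketch only: estimate each client's non-IID degree via weight
# divergence from a reference model, then bias selection toward low-divergence
# clients. Details (divergence metric, sampling rule) are assumptions, not the
# paper's exact method.
import numpy as np

def weight_divergence(client_w, reference_w):
    """Relative L2 distance between a client's local weights and a reference
    model (e.g., one trained on IID data), flattened across all layers."""
    cw = np.concatenate([w.ravel() for w in client_w])
    rw = np.concatenate([w.ravel() for w in reference_w])
    return np.linalg.norm(cw - rw) / (np.linalg.norm(rw) + 1e-12)

def selection_probabilities(divergences, selection_temperature=1.0):
    """Softmax over negative divergence: clients whose weights diverge less
    from the reference (lower estimated non-IID degree) are assigned a higher
    probability of being chosen in a training round."""
    scores = -np.asarray(divergences) / selection_temperature
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Example: three hypothetical clients with increasing divergence from the reference.
rng = np.random.default_rng(0)
reference = [rng.normal(size=(10, 10))]
clients = [[reference[0] + rng.normal(scale=s, size=(10, 10))] for s in (0.01, 0.1, 0.5)]
divs = [weight_divergence(c, reference) for c in clients]
print(selection_probabilities(divs))            # lowest-divergence client gets the largest share
```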