Abstract
Federated learning is a distributed machine learning framework that enables a large number of devices to cooperatively train a model without sharing their data. However, because federated learning trains a model on non-independent and identically distributed (non-IID) data stored at local devices, weight divergence causes a loss of performance. This paper addresses the non-IID problem and proposes a Kalman filter-based clustered federated learning method, called K-FL, which gains performance by providing each device with a specific model of low variance. To the best of our knowledge, it is the first clustered federated learning method that can train a model in fewer communication rounds in a non-IID environment without any prior knowledge or user-specified initial values. Simulations show that the proposed K-FL trains a model much faster, requiring fewer communication rounds than FedAvg and LG-FedAvg, when testing neural networks on the MNIST, FMNIST, and CIFAR-10 datasets. Numerically, accuracy improves on all datasets while the computational time cost is reduced by 1.43×, 1.67×, and 1.63×, respectively, compared to FedAvg.
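To illustrate the general idea of combining Kalman filtering with client clustering, the following minimal Python sketch is included. It is not the paper's K-FL algorithm; the filter model, the choice of filtering a per-client update statistic, the noise variances, and the naive two-way cluster split are all assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's implementation): each client runs a
# scalar Kalman filter that smooths a summary statistic of its local model
# updates (here, an update norm). Clients with similar filtered statistics
# could then be grouped into clusters, each receiving its own aggregated
# model, in the spirit of clustered federated learning such as K-FL.
import numpy as np


class ScalarKalmanFilter:
    """1-D Kalman filter with a random-walk state model (illustrative assumption)."""

    def __init__(self, process_var=1e-3, measurement_var=1e-1):
        self.q = process_var      # process noise variance (assumed)
        self.r = measurement_var  # measurement noise variance (assumed)
        self.x = 0.0              # state estimate
        self.p = 1.0              # estimate variance

    def update(self, z):
        # Predict: random-walk model, so the state is unchanged and uncertainty grows.
        self.p += self.q
        # Correct: blend the prediction with the new measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x


# Usage: filter each client's per-round update statistic over several
# communication rounds, then split clients whose filtered values are close.
rng = np.random.default_rng(0)
filters = [ScalarKalmanFilter() for _ in range(4)]
true_means = [0.5, 0.5, 2.0, 2.0]            # two underlying (non-IID) client groups
for _ in range(20):                          # 20 communication rounds
    for f, m in zip(filters, true_means):
        f.update(m + 0.3 * rng.standard_normal())

estimates = np.array([f.x for f in filters])
clusters = (estimates > estimates.mean()).astype(int)   # naive 2-way split for illustration
print("filtered statistics:", np.round(estimates, 2))
print("cluster assignment: ", clusters)
```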