Abstract

Federated Learning (FL) is an emerging research field that trains a global model from multiple local clients without violating data privacy. Existing FL techniques often ignore the distinction between local models and the aggregated global model during the client-side weight update, as well as the differences among local models during server-side aggregation. In this article, we propose a novel FL approach that resorts to mutual information (MI). Specifically, on the client side, the weight update is reformulated by minimizing the MI between the local and aggregated models and employing a Negative Correlation Learning (NCL) strategy. On the server side, we select the most effective local models for aggregation based on the MI between each individual local model and the previous aggregated model. We also theoretically prove the convergence of our algorithm. Experiments conducted on MNIST, CIFAR-10, ImageNet, and the clinical MIMIC-III datasets demonstrate that our method outperforms state-of-the-art techniques in terms of both communication efficiency and testing performance.
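The server-side selection step can be illustrated with a short sketch. The code below is a hypothetical reading of the abstract, not the paper's actual algorithm: it scores each local model by the mutual information between its predictions and those of the previous global model on a shared probe batch (using scikit-learn's `mutual_info_score` on hard labels as a stand-in for the paper's unspecified MI estimator), keeps the top-k models, and averages their weights. All names, the probe batch, and the choice of k are illustrative assumptions.

```python
# Hypothetical sketch of the server-side step: score each local model by
# the MI between its hard predictions and those of the previous global
# model on a shared probe batch, keep the top-k, and average weights.
# The discrete-prediction MI proxy is an assumption, not the paper's
# estimator.
import copy

import torch
import torch.nn as nn
from sklearn.metrics import mutual_info_score


def mi_score(local_model: nn.Module, prev_global: nn.Module,
             probe_x: torch.Tensor) -> float:
    """Crude MI estimate between the two models' predicted labels."""
    with torch.no_grad():
        local_pred = local_model(probe_x).argmax(dim=1).numpy()
        global_pred = prev_global(probe_x).argmax(dim=1).numpy()
    return mutual_info_score(local_pred, global_pred)


def select_and_aggregate(local_models, prev_global, probe_x, k):
    """Average the weights of the k local models with the highest MI."""
    scores = [mi_score(m, prev_global, probe_x) for m in local_models]
    top = sorted(range(len(local_models)),
                 key=lambda i: scores[i], reverse=True)[:k]
    agg = {name: sum(local_models[i].state_dict()[name] for i in top) / k
           for name in prev_global.state_dict()}
    new_global = copy.deepcopy(prev_global)
    new_global.load_state_dict(agg)
    return new_global


# Toy usage: five "clients" with linear models on 10-d inputs, 3 classes.
clients = [nn.Linear(10, 3) for _ in range(5)]
global_model = nn.Linear(10, 3)
probe = torch.randn(64, 10)  # shared probe data held by the server
new_global = select_and_aggregate(clients, global_model, probe, k=3)
```

Averaging only the selected top-k state dictionaries keeps the aggregation rule a drop-in replacement for plain FedAvg while filtering out local models that diverge most from the previous global model under the MI score.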
