Abstract

Federated learning (FL) is an emerging framework that enables massive numbers of clients (e.g., mobile devices or enterprises) to collaboratively construct a global model without sharing their local data. However, because the server has no direct access to clients' data, the global model is vulnerable to attacks by malicious clients using poisoned data. Many strategies have been proposed to mitigate the threat of label flipping attacks, but they either require considerable computational overhead or lack robustness, and some even raise privacy concerns. In this paper, we propose Malicious Clients Detection Federated Learning (MCDFL) to defend against the label flipping attack. It identifies malicious clients by recovering a distribution over a latent feature space to assess the data quality of each client. We demonstrate the effectiveness of our proposed strategy on two benchmark datasets, i.e., CIFAR-10 and Fashion-MNIST, considering different neural network models and different attack scenarios. The results show that our solution robustly detects malicious clients without excessive cost under various conditions, where the proportion of malicious clients ranges from 5% to 40%.
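The label flipping attack the abstract refers to can be illustrated with a minimal sketch: a malicious client replaces the true labels of a fraction of its local training samples with other classes before participating in federated training. The helper `flip_labels` below is illustrative only and not part of the paper's method.

```python
import random

def flip_labels(labels, num_classes, flip_fraction, seed=0):
    """Simulate a label-flipping attack: replace the true label of a
    fraction of samples with a different, randomly chosen class.
    (Illustrative helper; not from the paper.)"""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(flip_fraction * len(poisoned))
    for i in rng.sample(range(len(poisoned)), n_flip):
        # Pick any class other than the true one.
        choices = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(choices)
    return poisoned

# A toy client dataset: 100 labels spread over 5 classes.
clean = [0, 1, 2, 3, 4] * 20
poisoned = flip_labels(clean, num_classes=5, flip_fraction=0.4)
flipped = sum(c != p for c, p in zip(clean, poisoned))
print(flipped)  # → 40
```

A detection scheme such as MCDFL would then have to distinguish a client holding `poisoned` from one holding `clean` without ever seeing the raw data directly.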
