Abstract

Federated learning (FL) is an efficient distributed machine learning paradigm in which many clients collaboratively train neural network models with the assistance of a central server. A key challenge is that malicious clients can send poisoned model updates to the central server, making FL vulnerable to model poisoning attacks. In this paper, we propose a new system named DeMAC to improve the detection of and defence against model poisoning attacks by malicious clients. The main idea behind the new system is based on the observation that, because malicious clients need to reduce the loss of the poisoning task, their gradient norms increase markedly. We define a metric called GradScore to measure each client's gradient norm. Experiments show that the GradScores of malicious and benign clients are distinguishable at all stages of training. DeMAC can therefore detect malicious clients by measuring their GradScores. Furthermore, a historical record of contributed global model updates is used to enhance DeMAC so that it can detect malicious behaviour spontaneously, without manual configuration. Experimental results on two benchmark datasets show that DeMAC reduces the attack success rate under various attack strategies. In addition, DeMAC can eliminate model poisoning attacks in heterogeneous environments.
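The detection idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract only says that GradScore measures the norm of a client's gradients, so the L2-norm reading, the median-based threshold, and the `factor` parameter below are all assumptions for illustration.

```python
import numpy as np

def grad_score(update):
    """Hypothetical GradScore: the L2 norm of a client's model update,
    flattened across all layers. The abstract says GradScore measures
    the norm of gradients; the exact definition is in the paper body."""
    return float(np.linalg.norm(np.concatenate([np.ravel(g) for g in update])))

def flag_malicious(updates, factor=2.0):
    """Flag clients whose GradScore is far above the median score.
    `factor` is an illustrative threshold, not taken from the paper;
    DeMAC instead uses a historical record of global model updates to
    set this boundary automatically."""
    scores = np.array([grad_score(u) for u in updates])
    median = np.median(scores)
    return [i for i, s in enumerate(scores) if s > factor * median]
```

A poisoned update with an inflated gradient norm would then stand out against the benign clients' scores in each round.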
