Abstract

Federated learning enables distributed training of deep learning models among user equipment (UEs) to obtain a high-quality global model. A centralized server aggregates the updates submitted by UEs without knowledge of the local training data or process. Despite this privacy-preserving merit, we reveal a severe security concern: malicious UEs can manipulate their training data by injecting a backdoor trigger. As a result, a global model that aggregates such malicious updates may misclassify samples containing the backdoor trigger. However, the effect of a single backdoor trigger is quickly diluted by subsequent benign updates. In this work, we present an effective coordinated backdoor attack against federated learning using multiple local triggers; the global trigger is composed of several separate local triggers. Moreover, in contrast to using random triggers, we propose model-dependent triggers (i.e., triggers generated based on the attackers' local models) to conduct backdoor attacks. We conduct extensive experiments to assess the effectiveness of our proposed backdoor attacks on the MNIST and CIFAR-10 datasets. Experimental results show that our proposed methodology outperforms both coordinated attacks using random triggers and single-trigger backdoor attacks in terms of attack success rate. We also show that Byzantine-resilient aggregation methodologies are not robust to our proposed attacks.
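To make the coordination idea concrete, the following is a minimal, illustrative sketch of one federated round with four malicious UEs, each stamping only its own local trigger patch into a fraction of its data, while the server performs plain (non-robust) FedAvg. It is not the authors' implementation: the trigger positions, poison fraction, target label, and the helper names (`stamp`, `poison_local_data`, `fedavg`) are assumptions for illustration, local training is stubbed out with placeholder parameter vectors, and the paper's model-dependent trigger generation is not shown (fixed patches are used instead).

```python
# Illustrative sketch only (not the paper's code): coordinated local triggers
# whose union forms the global trigger, aggregated by plain FedAvg.
import numpy as np

rng = np.random.default_rng(0)
IMG_SHAPE = (28, 28)      # MNIST-like grayscale inputs (assumption)
TARGET_LABEL = 0          # attacker-chosen target class (assumption)

# Global trigger split into four local patches; each malicious UE uses one.
LOCAL_TRIGGERS = [
    [(0, 0), (0, 1), (1, 0), (1, 1)],      # attacker 1
    [(0, 3), (0, 4), (1, 3), (1, 4)],      # attacker 2
    [(0, 6), (0, 7), (1, 6), (1, 7)],      # attacker 3
    [(0, 9), (0, 10), (1, 9), (1, 10)],    # attacker 4
]

def stamp(images, pixels, value=1.0):
    """Embed a trigger patch (list of pixel coordinates) into a batch of images."""
    poisoned = images.copy()
    for r, c in pixels:
        poisoned[:, r, c] = value
    return poisoned

def poison_local_data(images, labels, pixels, poison_frac=0.2):
    """An attacker poisons a fraction of its data with its *local* trigger
    and relabels those samples with the target label."""
    idx = rng.choice(len(images), int(len(images) * poison_frac), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = stamp(images[idx], pixels)
    labels[idx] = TARGET_LABEL
    return images, labels

def fedavg(updates):
    """Server-side aggregation: plain average of client parameter vectors."""
    return np.mean(np.stack(updates), axis=0)

if __name__ == "__main__":
    client_updates = []
    for pixels in LOCAL_TRIGGERS:
        x = rng.random((64, *IMG_SHAPE))
        y = rng.integers(0, 10, size=64)
        x_p, y_p = poison_local_data(x, y, pixels)
        # Stand-in for local training on (x_p, y_p): a placeholder update vector.
        client_updates.append(rng.normal(size=1000) + x_p.mean())
    global_params = fedavg(client_updates)

    # At inference time, the *global* trigger is the union of all local patches.
    global_trigger = [p for patch in LOCAL_TRIGGERS for p in patch]
    triggered_sample = stamp(rng.random((1, *IMG_SHAPE)), global_trigger)
    print("aggregated parameters (first 5):", global_params[:5])
```

In this setup, no single malicious update carries the full global trigger, which is what makes the combined attack harder to dilute or attribute than a single-trigger attack applied by one client.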
