Abstract

Federated Learning (FL) provides enhanced privacy over traditional centralized learning; unfortunately, it remains as susceptible to backdoor attacks as its centralized counterpart. In conventional data-poisoning backdoor attacks, all malicious participants overlay the same single trigger pattern on a subset of their private data during local training, and that same trigger is then used to activate the backdoor in the otherwise benign global model at inference time. Such single-trigger attacks can be detected and removed with relative ease because they fail to exploit the distributed nature of FL. In this work, we build an attack scheme in which each group of malicious clients uses a distinct, sizeable local trigger during local training, while the attack can still be invoked with a single small trigger at global-model inference time. The larger trigger patterns prolong the attack's effect even after the malicious clients stop participating. Extensive experiments show that our approach is faster, stealthier, and more effective than the centralized-trigger approach. We explain the stealthiness of our attack using the DeepLIFT visual feature-attribution method.
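To make the scheme concrete, the sketch below illustrates the distributed-trigger poisoning step described above. It is a minimal, hypothetical rendering, not the paper's implementation: the image shape (32x32x3 in [0, 1]), the four stripe-shaped group triggers, the poison fraction, the target label, and the small inference patch are all illustrative assumptions. In each FL round, a malicious client would train on its poisoned shard before submitting its model update.

```python
import numpy as np

# Hypothetical sketch of distributed-trigger data poisoning in FL.
# Assumptions (not from the paper): 32x32x3 images in [0, 1], four
# malicious groups, each stamping a distinct, sizeable stripe region.

TARGET_LABEL = 0         # attacker-chosen target class (assumed)
POISON_FRACTION = 0.2    # share of local data poisoned (assumed)

# Each group's local trigger: a distinct horizontal stripe region.
GROUP_TRIGGERS = {
    0: (slice(0, 3), slice(0, 16)),    # rows 0-2, cols 0-15
    1: (slice(0, 3), slice(16, 32)),   # rows 0-2, cols 16-31
    2: (slice(3, 6), slice(0, 16)),    # rows 3-5, cols 0-15
    3: (slice(3, 6), slice(16, 32)),   # rows 3-5, cols 16-31
}

def poison_local_data(images, labels, group_id, rng):
    """Overlay the group's local trigger on a random subset of the
    client's images and relabel those images to the target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(POISON_FRACTION * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    rows, cols = GROUP_TRIGGERS[group_id]
    images[idx, rows, cols, :] = 1.0   # stamp a white stripe
    labels[idx] = TARGET_LABEL
    return images, labels

def apply_inference_trigger(image):
    """Stamp the single small test-time trigger (assumed 3x4 patch)
    used to invoke the backdoor in the trained global model."""
    image = image.copy()
    image[0:3, 0:4, :] = 1.0
    return image
```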
