Abstract


 
 
Federated learning is conceived as a privacy-preserving framework that trains deep neural networks on decentralized data. However, its decentralized nature exposes new attack surfaces. The privacy guarantees of federated learning prevent the server from inspecting local data and training pipelines. These restrictions rule out many common defenses against poisoning attacks, such as data sanitization and traditional anomaly detection methods. The most devastating attacks are usually those that corrupt the model without degrading performance on the main task. Backdoor attacks are prominent examples of adversarial attacks that often go unnoticed in the absence of sophisticated defenses. This paper sheds light on backdoor attacks in federated learning, where we aim to manipulate the global model so that it misclassifies samples belonging to a particular target task while maintaining high accuracy on the main objective. Unlike existing works, we adopt a novel approach that directly manipulates the gradients’ momentums to introduce the backdoor. Specifically, the double momentum backdoor attack computes two momentums separately, one from malicious inputs and one from original inputs, and uses both to update the model. Through experimental evaluation, we demonstrate that our attack introduces the backdoor while successfully evading detection.
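
The abstract only outlines the mechanism, so the following is an illustrative reconstruction of what a "two separate momentums" malicious client update might look like, assuming a plain SGD-with-momentum local training loop in PyTorch. The function and parameter names (double_momentum_local_update, benign_loader, backdoor_loader, alpha, beta) are hypothetical and not taken from the paper.

```python
import torch

def double_momentum_local_update(model, benign_loader, backdoor_loader,
                                 loss_fn, lr=0.01, beta=0.9, alpha=0.5, epochs=1):
    # One momentum buffer per parameter for each objective:
    # the clean main task and the attacker's backdoor task.
    m_benign = [torch.zeros_like(p) for p in model.parameters()]
    m_backdoor = [torch.zeros_like(p) for p in model.parameters()]

    for _ in range(epochs):
        for (xb, yb), (xa, ya) in zip(benign_loader, backdoor_loader):
            # Gradient of the main-task loss on clean samples.
            model.zero_grad()
            loss_fn(model(xb), yb).backward()
            g_benign = [p.grad.detach().clone() for p in model.parameters()]

            # Gradient of the backdoor loss on trigger-stamped samples
            # relabelled to the attacker's target class.
            model.zero_grad()
            loss_fn(model(xa), ya).backward()
            g_backdoor = [p.grad.detach().clone() for p in model.parameters()]

            # Maintain the two momentum terms separately, then blend them
            # into a single parameter update (alpha controls the mix).
            with torch.no_grad():
                for p, mb, ma, gb, ga in zip(model.parameters(), m_benign,
                                             m_backdoor, g_benign, g_backdoor):
                    mb.mul_(beta).add_(gb)
                    ma.mul_(beta).add_(ga)
                    p.add_(-(lr * ((1 - alpha) * mb + alpha * ma)))
    return model
```

Keeping the benign and malicious momentum buffers separate lets the client balance the two objectives at update time, so the resulting model update can stay close to an honest client's update while still embedding the backdoor; the exact blending rule used in the paper may differ from this sketch.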
 
 
