Abstract

The training process of federated learning is known to be vulnerable to adversarial attacks (e.g., backdoor attacks). Previous works have shown that differential privacy (DP) can be used to defend against backdoor attacks, but at the cost of a substantial loss in model utility. To address this issue, in this paper we propose a DP-based defense method, called Clip Norm Decay (CND), that maintains utility while defending against backdoor attacks. CND reduces the injected noise by decreasing the clipping threshold of model updates throughout training. In particular, our algorithm bounds the norm of malicious updates by adaptively setting appropriate thresholds according to the current model updates. Empirical results show that CND substantially enhances the accuracy of the main task when defending against backdoor attacks. Moreover, extensive experiments demonstrate that our method provides a stronger defense than the original DP, further reducing the attack success rate, even under a strong threat model assumption.
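To illustrate the core idea of decaying the clipping threshold in DP aggregation, the sketch below shows a server-side aggregation round in which the clip norm shrinks over training rounds and the Gaussian noise, which is calibrated to the clip norm, shrinks with it. The exponential decay schedule, function name `cnd_aggregate`, and all parameter names are illustrative assumptions; the abstract does not specify the paper's exact adaptive rule for choosing thresholds.

```python
import numpy as np

def cnd_aggregate(client_updates, round_idx, base_clip=1.0,
                  decay=0.99, noise_multiplier=1.0):
    """Minimal sketch of DP aggregation with a decaying clip norm.

    Assumed exponential decay schedule; the paper's adaptive
    threshold rule (based on current model updates) is not shown here.
    """
    # Shrink the clipping threshold as training progresses.
    clip_norm = base_clip * (decay ** round_idx)

    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the current threshold,
        # which also bounds the influence of malicious updates.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))

    # Average the clipped updates and add Gaussian noise calibrated to the
    # (now smaller) clip norm, as in DP-FedAvg-style aggregation.
    mean_update = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean_update + np.random.normal(0.0, sigma, size=mean_update.shape)
```

Because the noise standard deviation is proportional to the clipping threshold, lowering the threshold over rounds reduces the noise injected into the global model, which is how utility can be preserved relative to a fixed-threshold DP defense.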
