Abstract

Deep learning has been widely applied in areas such as face recognition and autonomous driving. However, deep learning models are vulnerable to a variety of adversarial attacks, among which backdoor attacks have emerged recently. Most existing backdoor attacks use the same trigger, or the same trigger-generation approach, to produce the poisoned samples in both the training and testing sets, an assumption that many backdoor defense strategies also rely on. In this paper, we develop an enhanced backdoor attack (EBA) that aims to reveal potential flaws in existing backdoor defense methods: a low-intensity trigger is used to embed the backdoor, while a high-intensity trigger activates it. Furthermore, we propose an enhanced coalescence backdoor attack (ECBA), in which multiple low-intensity incipient triggers are designed to train the backdoor model; at attack time, all incipient triggers are gathered on one sample and enhanced to launch the attack. Experimental results on three popular datasets show that the proposed attacks achieve high attack success rates while maintaining the model's classification accuracy on benign samples. Meanwhile, because the incipient poisoned samples remain hidden and do not activate the backdoor on their own, the proposed attack exhibits significant stealth and the ability to evade mainstream defense methods during the model training phase.
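The core idea of EBA, embedding the backdoor with a low-intensity trigger and activating it with a high-intensity version of the same trigger, can be sketched as follows. This is a minimal illustration only: the image, the trigger pattern, and the intensity values are assumptions for demonstration, not the paper's actual configuration.

```python
import numpy as np

def embed_trigger(image, trigger, intensity):
    """Blend a trigger pattern into an image at the given intensity in [0, 1]."""
    return np.clip(image + intensity * trigger, 0.0, 1.0)

# Hypothetical 4x4 grayscale image and a bottom-right patch trigger (illustrative).
image = np.zeros((4, 4))
trigger = np.zeros((4, 4))
trigger[-2:, -2:] = 1.0

# Training phase: poison samples with a low-intensity trigger,
# weak enough to stay inconspicuous to training-time defenses.
poisoned_train = embed_trigger(image, trigger, intensity=0.1)

# Attack phase: the same trigger, enhanced to high intensity,
# is strong enough to activate the embedded backdoor.
poisoned_test = embed_trigger(image, trigger, intensity=0.9)
```

Under ECBA, the same blending step would be applied with several distinct low-intensity incipient triggers during training, and all of them would be stamped onto a single sample and enhanced at attack time.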
