Abstract
Backdoor (Trojan) attacks are an emerging threat to deep neural networks (DNNs). A successfully attacked DNN predicts the attacker's target class whenever a test sample from any source class is embedded with the backdoor pattern, while still correctly classifying clean (attack-free) test samples. Existing backdoor defenses have shown success in detecting whether a DNN has been attacked and in reverse-engineering the backdoor pattern in the "post-training" regime: the defender has access to the DNN to be inspected and to a small, clean dataset collected independently, but not to the (possibly poisoned) training set of the DNN. However, these defenses neither catch culprits in the act of triggering the backdoor mapping, nor mitigate the attack at test time. In this paper, we propose an "in-flight" defense against backdoor attacks on image classification that 1) detects use of a backdoor trigger at test time and 2) infers the class of origin (source class) of a detected trigger example. The effectiveness of our defense is demonstrated experimentally against several strong backdoor attacks.
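The abstract does not spell out the detection mechanism, but the following minimal sketch illustrates one common way an in-flight defense of this general shape can be structured: a per-class anomaly score computed on internal-layer features of each test sample, with a detection threshold calibrated on the defender's small clean dataset. All names (fit_class_stats, detect), the Mahalanobis-distance score, and the nearest-other-class rule for source-class inference are assumptions for illustration, not the paper's actual method.

    import numpy as np

    # Illustrative sketch only, NOT the paper's method: flag a test sample
    # as trigger-bearing when its penultimate-layer feature vector is
    # anomalous relative to clean per-class statistics, then infer the
    # source class as the nearest other class in feature space.

    def fit_class_stats(features, labels):
        """Per-class means and a shared precision matrix from clean features."""
        classes = np.unique(labels)
        means = {c: features[labels == c].mean(axis=0) for c in classes}
        centered = np.vstack([features[labels == c] - means[c] for c in classes])
        cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        return means, np.linalg.inv(cov)

    def detect(feature, predicted_class, means, prec, threshold):
        """Return (is_trigger, inferred_source_class) for one test sample."""
        d = feature - means[predicted_class]
        score = float(d @ prec @ d)  # Mahalanobis distance to predicted class
        if score <= threshold:
            return False, predicted_class  # consistent with a clean sample
        # Flagged: attribute the sample to the nearest non-target class.
        others = {c: float((feature - m) @ prec @ (feature - m))
                  for c, m in means.items() if c != predicted_class}
        return True, min(others, key=others.get)

In a sketch like this, threshold would typically be set to a high percentile (e.g., the 95th) of scores computed on the defender's clean dataset, trading off false alarms on clean samples against sensitivity to triggered ones.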