Abstract

Traditional backdoor attacks insert a trigger patch into the training images and associate the trigger with the targeted class label. Backdoor attacks are a rapidly evolving class of attack that can have a significant impact. Adversarial perturbations, on the other hand, rely on a substantially different attack mechanism from traditional backdoor corruptions: an imperceptible noise is learned to fool deep learning models. In this research, we amalgamate these two concepts and propose a novel imperceptible backdoor attack, termed IBAttack, in which adversarial images are associated with the desired target classes. A significant advantage of the proposed adversarial backdoor attack over traditional trigger-based mechanisms is its imperceptibility. The proposed dynamic adversarial attack, in contrast to existing attacks, is agnostic to both the classifier and the trigger pattern. Extensive evaluation on multiple databases and networks demonstrates the effectiveness of the proposed attack.
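The abstract does not disclose implementation details, but the core idea of poisoning with adversarial examples rather than trigger patches can be illustrated with a minimal sketch. The sketch below assumes a PyTorch image classifier and an FGSM-style targeted perturbation; the function names (fgsm_perturb, poison_dataset), the poisoning rate, and the perturbation budget are all hypothetical illustrations, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y_target, eps=8 / 255):
    """Craft an imperceptible targeted perturbation (FGSM-style sketch).

    The perturbation nudges x toward the attacker's target class while
    staying within an L-infinity ball of radius eps, so the poisoned
    image looks essentially unchanged to a human inspector.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_target)
    loss.backward()
    # Step *against* the gradient to decrease the targeted loss.
    x_adv = x - eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def poison_dataset(model, images, labels, target_class, rate=0.1):
    """Replace a small fraction of samples with adversarially perturbed
    copies relabeled as the target class -- trigger-free backdoor
    poisoning in the spirit of the attack described above."""
    n_poison = int(rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    y_target = torch.full((n_poison,), target_class, dtype=torch.long)
    images[idx] = fgsm_perturb(model, images[idx], y_target)
    labels[idx] = target_class
    return images, labels
```

Because the poison carries no visible patch, the relabeled samples are far harder to spot during data inspection than classic trigger-patch poisons, which is the imperceptibility advantage the abstract highlights.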
