Abstract

With new applications made possible by the fusion of edge computing and artificial intelligence (AI) technologies, the global market capitalization of edge AI has risen tremendously in recent years. Deploying pre-trained deep neural network (DNN) models on edge computing platforms, however, does not alleviate the fundamental trust assurance issue arising from the lack of interpretability of end-to-end DNN solutions. The most notorious threat to DNNs is the backdoor attack. Most backdoor attacks require a relatively large injection rate (≈ 10%) to achieve a high attack success rate. Their trigger patterns are not always stealthy and can be easily detected or removed by backdoor detectors. Moreover, these attacks have only been tested on DNN models implemented on general-purpose computing platforms. This paper proposes using data augmentation for backdoor attacks to increase the stealth, attack success rate, and robustness. Different data augmentation techniques are applied independently to the three color channels to embed a composite trigger. The data augmentation strength is tuned based on the Gradient Magnitude Similarity Deviation, which is used to objectively assess the visual imperceptibility of the poisoned samples. A rich set of composite triggers can be created for different dirty labels. The proposed attacks are evaluated on pre-activation ResNet18 trained with the CIFAR-10 and GTSRB datasets, and on EfficientNet-B0 trained with an adapted 10-class ImageNet dataset. A high attack success rate of above 97% with only a 1% injection rate is achieved on these DNN models implemented on both general-purpose computing platforms and the Intel Neural Compute Stick 2 edge AI device. The accuracy loss of the poisoned DNNs on benign inputs is kept below 0.6%. The proposed attack is also shown to be resilient to state-of-the-art backdoor defense methods.
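To make the per-channel composite-trigger idea concrete, the sketch below poisons a small fraction of a dataset by applying a different mild augmentation to each color channel and relabeling those samples to a target class. The specific augmentations (brightness shift, gamma, contrast) and their strengths are illustrative assumptions, not the paper's actual configuration, and the GMSD-based strength tuning described in the abstract is omitted.

```python
import numpy as np

def embed_composite_trigger(img, strengths=(0.05, 0.9, 1.1)):
    """Hypothetical composite trigger: one augmentation per color channel.

    img: float HxWx3 array with values in [0, 1].
    R channel: brightness shift; G channel: gamma correction;
    B channel: contrast scaling about the channel mean.
    """
    out = img.copy()
    b_shift, gamma, contrast = strengths
    out[..., 0] = np.clip(out[..., 0] + b_shift, 0.0, 1.0)          # R: brightness
    out[..., 1] = np.clip(out[..., 1] ** gamma, 0.0, 1.0)           # G: gamma
    mean_b = out[..., 2].mean()
    out[..., 2] = np.clip((out[..., 2] - mean_b) * contrast + mean_b,
                          0.0, 1.0)                                 # B: contrast
    return out

def poison_dataset(images, labels, target_label, injection_rate=0.01, seed=0):
    """Embed the trigger in a random injection_rate fraction of samples
    and assign them the attacker's target (dirty) label."""
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(injection_rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned = images.copy()
    new_labels = labels.copy()
    for i in idx:
        poisoned[i] = embed_composite_trigger(images[i])
        new_labels[i] = target_label
    return poisoned, new_labels, idx
```

Because each channel carries an independent augmentation, varying the per-channel combination yields a family of distinct composite triggers, one per dirty label, as the abstract describes.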
