Abstract

Recent studies have shown that the performance of deep neural networks is extremely susceptible to adversarial examples: slight perturbations of the input that are imperceptible to humans can produce incorrect predictions. Most existing attack methods rely on iterative gradient or optimization operations and therefore cannot generate adversarial examples in real time, and the robustness and imperceptibility of the resulting adversarial examples are also limited. To address these limitations, we propose LIGAA, a novel generative adversarial attack method based on low-frequency information, which achieves end-to-end real-time generation of adversarial examples. It consists of a low-frequency information extractor and two symmetric decoders: a noise decoder and a saliency map decoder. The low-frequency information extractor eliminates "useless" features in the original input. The noise decoder generates perturbation noise over the entire image region to induce misclassification. The saliency map decoder restricts the added noise to the specific areas that strongly influence classification, which effectively enhances the imperceptibility of the adversarial examples. Experimental results against MobileNetV2 on CIFAR-10, Imagenette and CIFAR-100 show that LIGAA achieves attack success rates of 86.51%, 88.72% and 60.38%, with generation times of 0.001107 s, 0.001706 s and 0.004125 s respectively, the best performance among all compared methods. In particular, even under the JPEG compression defense, the classification accuracy of ResNet-18 on LIGAA's adversarial examples does not recover but drops to 3.11%, which further verifies the attack performance and robustness of the method against this defense.
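The pipeline the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the actual extractor and decoders in LIGAA are learned modules, so the FFT low-pass filter, the random "decoder outputs", and the `eps` bound below are all assumptions used only to show how the pieces combine (extract low-frequency features, generate bounded noise, and gate it by a saliency map).

```python
import numpy as np

def low_frequency_extract(x, cutoff=8):
    """Keep only low-frequency components of an image via an FFT low-pass
    filter -- a stand-in for LIGAA's learned low-frequency extractor."""
    f = np.fft.fftshift(np.fft.fft2(x, axes=(0, 1)), axes=(0, 1))
    h, w = x.shape[:2]
    mask = np.zeros((h, w) + (1,) * (x.ndim - 2))
    cy, cx = h // 2, w // 2
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 1.0
    low = np.fft.ifft2(np.fft.ifftshift(f * mask, axes=(0, 1)), axes=(0, 1))
    return np.real(low)

def apply_attack(x, noise, saliency, eps=8 / 255):
    """Combine the two decoder outputs: bound the noise to [-eps, eps],
    restrict it to salient regions, and clip the result to a valid image."""
    perturbation = np.clip(noise, -eps, eps) * saliency  # saliency in [0, 1]
    return np.clip(x + perturbation, 0.0, 1.0)

# Toy 32x32 RGB image (CIFAR-10 sized) with placeholder decoder outputs.
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))
low = low_frequency_extract(x)               # features fed to the decoders
noise = rng.normal(0, 0.05, x.shape)         # would come from the noise decoder
saliency = rng.random(x.shape[:2] + (1,))    # would come from the saliency decoder
adv = apply_attack(x, noise, saliency)
print(adv.shape, float(np.abs(adv - x).max()))
```

Because the saliency map multiplies the bounded noise, pixels with low saliency receive almost no perturbation, which is the mechanism the abstract credits for the improved imperceptibility.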
