Abstract

Adversarial examples (AEs) pose a serious threat to the security and robustness of deep neural networks (DNNs). The transferability of AEs exposes the vulnerability of DNNs even in black-box settings. Because DNN models differ in architecture and decision boundary, an AE generated on one model has limited ability to deceive another model with a different representation capacity. To this end, we propose a dynamic ensemble attack (DEA), which generates AEs on several models with an Elastic Soft Attention Layer (ESAL) to boost their transferability. The ESAL allocates model weights according to the distance from the benign image to the perturbed image in feature space and the predicted probabilities of the models. Compared with an ensemble attack using equal weights, DEA disrupts the structural information of the image and enlarges the distance between the AE and the benign example in feature space. Experimental results on two benchmark datasets show that DEA achieves superior transferability to the traditional ensemble attack and effectively improves the black-box attack success rate. We also conduct a visual analysis of the attacking effect of AEs during generation, which further validates the advantage of DEA.
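The weighting idea described above can be illustrated with a minimal sketch. The abstract does not give the ESAL formula, so the scoring rule below is an assumption: a surrogate model that still assigns high probability to the true class while its feature representation has barely moved is "harder to fool" and receives a larger weight, and a softmax turns the scores into a convex combination. All function and variable names here are hypothetical.

```python
import numpy as np

def esal_weights(feature_dists, true_class_probs, temperature=1.0):
    """Sketch of dynamic ensemble weighting in the spirit of the ESAL.

    feature_dists: per-model distance from benign to perturbed image in
        feature space (larger = the attack has moved further on that model).
    true_class_probs: probability each model still assigns the true class
        (larger = the model is not yet fooled).

    NOTE: this scoring rule is an illustrative assumption, not the
    paper's exact formula.
    """
    d = np.asarray(feature_dists, dtype=float)
    p = np.asarray(true_class_probs, dtype=float)
    # Small feature shift and high remaining confidence => larger weight,
    # so subsequent attack iterations focus on the hardest models.
    scores = p / (d + 1e-8) / temperature
    exp = np.exp(scores - scores.max())  # stable softmax
    return exp / exp.sum()

# Hypothetical readings from three surrogate models.
dists = [0.5, 2.0, 1.2]   # feature-space distance benign -> perturbed
probs = [0.9, 0.1, 0.4]   # probability still assigned to the true class
w = esal_weights(dists, probs)
```

Under this rule the first model (barely perturbed, still confident) dominates the weighted loss, whereas an equal-weight ensemble would keep spending perturbation budget on models that are already fooled.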
