Abstract

Deep neural networks are easily fooled by adversarial examples, which poses serious security problems for applications of artificial intelligence technology. Adversarial examples and adversarial attacks can therefore serve as useful tools for evaluating the robustness of deep learning models before they are deployed. However, most adversarial examples generated with a single network model have weak transferability: they attack the source model effectively but rarely succeed against other network models. To address this problem, this paper studies adversarial examples generated by the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and their transferability on the ImageNet dataset. In addition, the paper proposes a pixel-level image fusion method to enhance transferability. The adversarial examples generated by this method can attack a variety of neural network models more effectively, and such highly transferable adversarial examples can serve as a benchmark for measuring the robustness of DNN models and defense methods.

Keywords: Adversarial attacks, Adversarial examples, Transferability, Pixel-level image fusion, Deep neural network models
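The abstract names FGSM and BIM as the base attacks. As a minimal illustrative sketch (not the paper's implementation), the PyTorch code below shows the standard single-step FGSM update and its iterative BIM variant; the function names fgsm_attack and bim_attack, and the model, epsilon, alpha, and step-count values are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Single-step FGSM: x_adv = x + epsilon * sign(grad_x of the loss).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        return (x + epsilon * grad.sign()).clamp(0, 1).detach()

    def bim_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
        # BIM: repeated small FGSM steps, projected back into the epsilon-ball
        # around the clean image and clipped to the valid pixel range.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
        return x_adv

Iterating with a small step size (BIM) typically yields stronger attacks on the source model than single-step FGSM, though, as the abstract notes, such single-model adversarial examples often transfer poorly to other networks.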
