Abstract

As the powerful visual processing capability of CNNs has become widely recognized, their security has also attracted attention. A large body of experiments shows that CNNs are extremely vulnerable to adversarial attacks. Existing attack methods perform well in the white-box setting, but in practice attackers can usually only mount black-box attacks, whose success rates are relatively low. At the same time, most attack methods perturb every pixel of the image, which introduces excessive distortion into the adversarial example. To this end, we propose an enhanced attack strategy, GF-Attack. It distinguishes between an attack region and a non-attack region and combines information from the flipped image during the attack. This strategy improves the transferability of the generated adversarial examples and reduces the amount of perturbation. We conducted single-model and ensemble-model attacks on eight models, covering both normal training and adversarial training, and compared the success rates and distances of the adversarial examples generated by methods enhanced with GF-Attack against the original methods. Experiments show that the GF-Attack-enhanced methods outperform the original methods in both the black-box and white-box settings, increasing the maximum success rate by 9.13% and reducing pixel perturbation by 404K.

Keywords: Adversarial attack, Adversarial example, Flip image, Grad-CAM, Transferability
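To illustrate the two ideas named in the abstract (restricting the perturbation to an attack region and using the flipped image), the following is a minimal PyTorch sketch of one sign-gradient step. It is an assumption-laden illustration, not the paper's implementation: the model, the step size eps, and the origin of region_mask (e.g., a thresholded Grad-CAM heat map) are all hypothetical placeholders.

```python
# Hypothetical sketch: a region-restricted sign-gradient step that also
# uses the horizontally flipped image. All names and parameters here are
# illustrative assumptions, not the authors' GF-Attack implementation.
import torch
import torch.nn.functional as F


def gf_style_step(model, x, y, region_mask, eps=8 / 255):
    """One attack step confined to `region_mask`.

    model       : classifier returning logits of shape (N, C)
    x           : input batch, shape (N, 3, H, W), values in [0, 1]
    y           : ground-truth labels, shape (N,)
    region_mask : float tensor in {0, 1}, shape (N, 1, H, W),
                  assumed to come from a Grad-CAM-style saliency map
    eps         : per-step perturbation budget
    """
    x = x.clone().detach().requires_grad_(True)

    # Loss on the original image.
    loss = F.cross_entropy(model(x), y)
    # Loss on the horizontally flipped image; autograd propagates through
    # torch.flip, so the resulting gradient is already aligned with x.
    loss_flip = F.cross_entropy(model(torch.flip(x, dims=[-1])), y)

    # Combine gradient information from both views.
    grad = torch.autograd.grad(loss + loss_flip, x)[0]

    # Perturb only the attack region; non-attack pixels are left untouched.
    x_adv = x + eps * grad.sign() * region_mask
    return x_adv.clamp(0, 1).detach()
```

In this sketch the mask keeps the perturbation count low (only salient pixels change) while the flipped-image gradient is meant to reduce overfitting to one model's view and thus help transferability; the actual GF-Attack procedure may differ in how the mask and flipped-image information are computed and combined.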
