Abstract

Convolutional neural networks (CNNs) are widely used in computer vision but can be deceived by carefully crafted adversarial images. In this paper, we propose an evolutionary algorithm (EA) based adversarial attack against CNNs trained on ImageNet. Operating in a black-box setting, our EA-based attack aims to generate adversarial images that are classified into the target category with high confidence (at least 75%) while remaining indistinguishable from the original images to the human eye. These constraints simulate a realistic adversarial attack scenario. We thoroughly evaluated the attack on 10 CNNs under three scenarios: high-confidence targeted, good-enough targeted, and untargeted. Furthermore, we compared it against well-known white-box and black-box attacks. The experimental results show that the proposed EA-based attack is superior to or on par with its competitors in both success rate and the visual quality of the adversarial images produced.
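
To make the attack setting concrete, the following is a minimal sketch of a generic EA-based black-box targeted attack of the kind the abstract describes: a population of small perturbations is evolved using only the model's output probabilities, stopping once the target-class confidence reaches the 75% threshold. All names (the predict callback, the population size, the mutation scale, the distortion penalty) are illustrative assumptions, not the paper's actual algorithm.

# Hypothetical sketch of an EA-based black-box targeted attack;
# hyper-parameters and the predict interface are assumptions.
import numpy as np

def ea_attack(image, target, predict, pop_size=20, sigma=0.05,
              conf_threshold=0.75, max_gens=1000):
    """Evolve perturbations until the target-class confidence reaches
    conf_threshold, querying only output probabilities (black-box)."""
    rng = np.random.default_rng(0)
    # Initialise a population of small random perturbations.
    pop = rng.normal(0.0, sigma, size=(pop_size,) + image.shape)
    for _ in range(max_gens):
        candidates = np.clip(image + pop, 0.0, 1.0)
        # Fitness: target-class probability, penalised by distortion
        # so adversarial images stay visually close to the original.
        probs = np.array([predict(c)[target] for c in candidates])
        dist = np.linalg.norm(pop.reshape(pop_size, -1), axis=1)
        fitness = probs - 1e-3 * dist
        best = int(np.argmax(fitness))
        if probs[best] >= conf_threshold:
            return candidates[best]  # high-confidence success
        # Selection + mutation: keep the best half, perturb copies.
        order = np.argsort(fitness)[::-1]
        elite = pop[order[: pop_size // 2]]
        children = elite + rng.normal(0.0, sigma, size=elite.shape)
        pop = np.concatenate([elite, children])
    return None  # attack failed within the query budget

Penalising the perturbation norm in the fitness function is one simple way to encode the visual-indistinguishability constraint; the paper's actual formulation may differ.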
