Abstract

Fooling deep neural networks (DNNs) through black-box optimization has become a popular adversarial attack paradigm, since the internal structure of the target DNN is typically unknown to the attacker. Nevertheless, recent black-box adversarial attacks struggle to balance attack ability against the visual quality of the generated adversarial examples (AEs) on high-resolution images. In this paper, we propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization, termed LMOA. By exploiting the spatial semantic information of images, we first use an attention map to determine which pixels to perturb. Instead of attacking the entire image, restricting the perturbation to the attended pixels avoids the notorious curse of dimensionality and thereby improves attack performance. Second, a large-scale multiobjective evolutionary algorithm searches over the reduced set of pixels in the salient region. Owing to these characteristics, the generated AEs can fool target DNNs while remaining imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed LMOA on the ImageNet data set. More importantly, it is more competitive than existing black-box adversarial attacks at generating high-resolution AEs with better visual quality.
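To make the two-stage idea concrete, the following minimal sketch illustrates attention-guided search-space reduction followed by a black-box perturbation search. The saliency proxy, the keep ratio, the (1+1)-style mutation loop, and the black_box_score placeholder are all illustrative assumptions for exposition; they are not the authors' attention model or the actual large-scale multiobjective evolutionary algorithm, which optimizes attack success and perturbation size as separate objectives.

    # Sketch: restrict a black-box attack to salient pixels (assumptions noted above).
    import numpy as np

    def saliency_map(img):
        """Crude class-agnostic saliency: deviation from a 3x3 box-blurred mean."""
        gray = img.mean(axis=-1)
        pad = np.pad(gray, 1, mode="edge")
        blur = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
        return np.abs(gray - blur)

    def select_pixels(img, keep_ratio=0.05):
        """Keep only the most salient pixels, shrinking the search space."""
        sal = saliency_map(img)
        k = max(1, int(keep_ratio * sal.size))
        idx = np.argpartition(sal.ravel(), -k)[-k:]
        return np.unravel_index(idx, sal.shape)  # (rows, cols) of perturbable pixels

    def black_box_score(x):
        """Hypothetical stand-in for the target DNN's confidence on the true class."""
        return float(x.mean())  # placeholder objective, not a real model

    # Toy single-objective evolution confined to the salient region: mutate a
    # sparse subset of the selected pixels and keep the child if it lowers the
    # model's score. The real LMOA search is multiobjective, not scalarized.
    rng = np.random.default_rng(0)
    img = np.random.rand(224, 224, 3)  # stand-in for a high-resolution input
    rows, cols = select_pixels(img)
    adv, best = img.copy(), black_box_score(img)
    for _ in range(100):
        child = adv.copy()
        sub = rng.random(rows.size) < 0.01  # mutate ~1% of salient pixels
        child[rows[sub], cols[sub]] += rng.normal(0, 0.05, (int(sub.sum()), 3))
        np.clip(child, 0, 1, out=child)
        if black_box_score(child) < best:
            adv, best = child, black_box_score(child)

For a 224x224 image, a keep ratio of 0.05 cuts the decision variables from roughly 50,000 pixels to about 2,500, which is the dimensionality reduction the abstract credits with making the evolutionary search tractable.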
