Abstract

Deep Neural Networks (DNNs) have demonstrated excellent performance in many fields. However, existing studies have shown that DNNs are highly susceptible to well-designed adversarial samples. Adversarial samples cause a system to make incorrect classifications or predictions and may lead to security risks in the real world. Many adversarial attack methods for crafting adversarial samples have been proposed; however, the excessive perturbation introduced by most of them makes the adversarial changes visible to the human eye. In this paper, we propose a pixel-level adversarial attack on attention, named PlAA, which attacks only a very small number of pixels in the attention area of a DNN to generate adversarial samples, achieving a high attack success rate while better hiding the adversarial perturbation. The experimental results show that, in the single-pixel attack scenario, our PlAA method improves the attack success rate by up to 34.16% compared with the existing one-pixel attack method. In the multi-pixel attack scenario, compared with the existing attack on attention (AoA) method, PlAA hides the adversarial perturbation better while maintaining the same high attack success rate.

Keywords: Adversarial attack, Attention mechanism, Pixel-level attack
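The abstract does not spell out PlAA's actual procedure, so the sketch below only illustrates the general idea it describes: restrict a pixel-level perturbation to the locations a network attends to most, here approximated by an input-gradient saliency map. The function name, the choice of saliency as the attention proxy, and parameters such as `k`, `epsilon`, and `steps` are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: attention-guided pixel-level perturbation.
# Uses gradient saliency as a stand-in for the network's attention map.
import torch
import torch.nn.functional as F


def attention_guided_pixel_attack(model, x, label, k=5, epsilon=1.0, steps=50, lr=0.1):
    """Perturb only the k most salient pixels of image x (shape [1, C, H, W])."""
    model.eval()
    x = x.clone().detach()

    # 1. Saliency map as a proxy for attention: gradient magnitude of the
    #    true-class logit with respect to the input.
    x_req = x.clone().requires_grad_(True)
    logits = model(x_req)
    logits[0, label].backward()
    saliency = x_req.grad.abs().sum(dim=1)[0]          # [H, W]

    # 2. Select the k highest-attention pixel locations.
    flat_idx = torch.topk(saliency.flatten(), k).indices
    rows, cols = flat_idx // saliency.shape[1], flat_idx % saliency.shape[1]

    # 3. Optimize an untargeted perturbation confined to those pixels.
    delta = torch.zeros_like(x, requires_grad=True)
    mask = torch.zeros_like(x)
    mask[0, :, rows, cols] = 1.0
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        adv = (x + mask * delta).clamp(0, 1)
        loss = -F.cross_entropy(model(adv), torch.tensor([label]))
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (x + mask * delta).clamp(0, 1).detach()
```

Because the perturbation is masked to a handful of high-attention pixels, the change to the image stays small in extent, which is the property the abstract highlights for hiding the adversarial disturbance.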
