Abstract

Adversarial example generation is a technique that perturbs inputs with imperceptible noise to induce misclassifications in neural networks, serving as a means to assess the robustness of such models. Among adversarial attack algorithms, the Momentum Iterative Fast Gradient Sign Method (MI-FGSM) and its variants constitute a class of highly effective attack strategies, achieving near-perfect attack success rates in white-box settings. However, these methods rely on the sign function, which severely degrades gradient information; this leads to low success rates in black-box attacks and produces large adversarial perturbations. In this paper, we introduce a novel adversarial attack algorithm, NA-FGTM. Our method employs the Tanh function in place of the sign function, which preserves gradient information more accurately. In addition, it combines the Adam optimization algorithm with Nesterov acceleration, which stabilizes gradient update directions and accelerates gradient convergence. As a result, the transferability of adversarial examples is enhanced. By integrating data augmentation techniques such as DIM, TIM, and SIM, NA-FGTM can further improve the efficacy of black-box attacks. Extensive experiments on the ImageNet dataset demonstrate that our method outperforms state-of-the-art approaches in terms of black-box attack success rate and generates adversarial examples with smaller perturbations.
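To make the kind of update the abstract refers to concrete, the following is a minimal sketch of one attack iteration in the style described, assuming the standard MI-FGSM, Nesterov, and Adam conventions; the symbols $\alpha$ (step size), $\mu$ (momentum decay), $\beta_1$, $\beta_2$ (moment decay rates), and $\varepsilon$ (numerical stabilizer) are illustrative assumptions, not the paper's exact formulation.

\[
x_t^{\mathrm{nes}} = x_t^{\mathrm{adv}} + \alpha \, \mu \, m_t
\]
\[
g_t = \frac{\nabla_x J\!\left(x_t^{\mathrm{nes}}, y\right)}{\left\lVert \nabla_x J\!\left(x_t^{\mathrm{nes}}, y\right) \right\rVert_1}
\]
\[
m_{t+1} = \beta_1 m_t + (1-\beta_1)\, g_t, \qquad v_{t+1} = \beta_2 v_t + (1-\beta_2)\, g_t^2
\]
\[
x_{t+1}^{\mathrm{adv}} = \mathrm{Clip}_{x,\epsilon}\!\left( x_t^{\mathrm{adv}} + \alpha \cdot \tanh\!\left( \frac{m_{t+1}}{\sqrt{v_{t+1}} + \varepsilon} \right) \right)
\]

The essential change relative to MI-FGSM is in the last step: the $\mathrm{sign}(\cdot)$ operation, which flattens every gradient component to $\pm 1$, is replaced by $\tanh(\cdot)$, which retains relative gradient magnitudes. This is the mechanism the abstract credits for improved transferability and smaller perturbations.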
