Abstract
The emergence of adversarial examples poses a serious threat to the secure deployment of convolutional neural networks in practice. Existing attack algorithms perform well in white-box scenarios but transfer poorly to unknown black-box models. Recent studies have revealed that models attend differently to different frequency components of an image, and that low-frequency components play a non-negligible role in model decisions. In this article, we present the frequency enhanced momentum iterative attack, called FE-MI-FGSM. Specifically, before each gradient update we apply Gaussian filtering to the image with multiple convolution kernels, pushing the processed images closer to the common decision boundaries of multiple models. We then average the white-box model's gradients over these processed images and use the result as the perturbation direction, generating adversarial examples with both a high white-box attack success rate and high transferability. Empirical results show that, compared with current mainstream gradient-based methods, our method performs better on both normally trained and adversarially trained models. Moreover, our method can be combined with gradient-based methods that integrate convergence algorithms or input transformations to further improve transferability.
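For illustration, the following PyTorch sketch shows the kind of update the abstract describes: at each iteration, the current adversarial image is low-pass filtered with several Gaussian kernels, the model's gradients over the filtered copies are averaged, and a momentum sign step is applied as in MI-FGSM. All function names and hyper-parameter values (kernel sizes, sigma, eps, steps, mu) are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    # Build a normalized 2-D Gaussian kernel of the given size and std.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def fe_mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0,
               kernel_sizes=(3, 5, 7), sigma=1.0):
    # Minimal sketch of a frequency-enhanced momentum iterative attack;
    # hyper-parameters here are assumptions for illustration only.
    alpha = eps / steps                   # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)               # accumulated momentum
    c = x.shape[1]

    # Pre-build one depthwise Gaussian filter per kernel size.
    filters = []
    for k in kernel_sizes:
        w = gaussian_kernel(k, sigma).expand(c, 1, k, k).contiguous().to(x.device)
        filters.append((w, k // 2))

    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        for w, pad in filters:
            # Low-pass filter the current adversarial image (depthwise conv).
            x_lp = F.conv2d(x_adv, w, padding=pad, groups=c)
            x_lp = x_lp.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_lp), y)
            grad_sum += torch.autograd.grad(loss, x_lp)[0]
        grad = grad_sum / len(filters)     # average gradient over filtered copies

        # Standard MI-FGSM momentum accumulation (L1-normalized) and sign step.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()

    return x_adv
```

Averaging gradients over several filter widths is what lets a single white-box gradient stand in for the shared, low-frequency decision behavior of multiple models, which is the intuition behind the improved transferability claimed above.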