Abstract

Deep neural networks are especially vulnerable to adversarial examples, which can mislead classifiers through imperceptible perturbations added to the input. While previous research can effectively generate adversarial examples in the white-box setting, producing threatening adversarial examples in the black-box setting, where attackers can only obtain the models' predictions for given inputs, remains a challenge. A feasible solution is to harness the transferability of adversarial examples, the property that allows an adversarial example to successfully attack multiple models simultaneously. This paper therefore explores how to enhance the transferability of adversarial examples and proposes a Nadam-based iterative algorithm (NAI-FGM). NAI-FGM achieves better convergence and effectively corrects the update deviation, thereby boosting the transferability of adversarial examples. To validate the effectiveness and transferability of the adversarial examples generated by NAI-FGM, this study conducts attacks on various single models and ensemble models on the open CIFAR-10 and CIFAR-100 datasets. Experimental results show that, on average, NAI-FGM achieves higher transferability against black-box models than state-of-the-art methods.
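The abstract's idea, replacing the plain momentum of momentum-based iterative attacks with a Nadam-style update, can be sketched as follows. This is an illustrative reconstruction, not the authors' exact NAI-FGM: the L1 gradient normalization follows the common MI-FGSM convention, and the function name, hyperparameters (`beta1`, `beta2`, `delta`), and the toy `grad_fn` interface are all assumptions.

```python
import numpy as np

def nadam_iterative_attack(x, y, grad_fn, eps=0.03, steps=10,
                           beta1=0.9, beta2=0.999, delta=1e-8):
    """Illustrative Nadam-based iterative fast gradient sketch.

    grad_fn(x_adv, y) is assumed to return the gradient of the
    attacked model's loss with respect to the input x_adv.
    """
    alpha = eps / steps           # per-step size so the total budget stays eps
    x_adv = x.copy()
    m = np.zeros_like(x)          # first moment (momentum)
    v = np.zeros_like(x)          # second moment (adaptive scaling)
    for t in range(1, steps + 1):
        g = grad_fn(x_adv, y)
        g = g / (np.sum(np.abs(g)) + delta)   # L1-normalize, as in MI-FGSM
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        # Nadam: bias-corrected second moment plus a Nesterov-style
        # lookahead that mixes the momentum term with the current gradient
        m_hat = (beta1 * m / (1 - beta1 ** (t + 1))
                 + (1 - beta1) * g / (1 - beta1 ** t))
        v_hat = v / (1 - beta2 ** t)
        x_adv = x_adv + alpha * np.sign(m_hat / (np.sqrt(v_hat) + delta))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into eps-ball
    return x_adv
```

A toy usage: ascending the squared distance to a reference point stands in for a model loss, just to exercise the update rule; any differentiable classifier loss gradient could be plugged in via `grad_fn`.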
