Abstract

Adversarial examples are widely used to test and evaluate the security and robustness of image classification models. Although adversarial attacks in the white-box setting can achieve a high success rate, the success rate of black-box attacks is relatively low due to overfitting. To this end, this paper proposes a diversified-input strategy to improve the transferability of adversarial examples. In this method, various transformations are applied to the original image multiple times at random, producing a batch of transformed images. During back-propagation, the gradient of the loss function is computed for each transformed image, and the resulting gradients are weighted and averaged to produce an adversarial perturbation, which is iteratively added to the original image to generate an adversarial example. By increasing both the variety of data-augmentation transformations and the number of input images, the proposed method effectively alleviates overfitting and improves the transferability of adversarial examples. Extensive experiments on the ImageNet dataset show that the proposed method outperforms benchmark methods in black-box attacks, achieving an average success rate of 97.2% when attacking multiple models simultaneously.
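The iterative procedure described in the abstract — randomly transform the input several times, average the loss gradients over the transformed batch, and take a signed step inside a perturbation budget — can be sketched as follows. This is a minimal illustration only: the linear classifier, the specific transformations, the uniform gradient weights, and all hyperparameter values are assumptions standing in for the paper's actual models and settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a real network: a linear softmax classifier,
# chosen so the input gradient can be written in closed form.
num_classes, dim = 5, 64
W = rng.normal(size=(num_classes, dim))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(x, label):
    """Cross-entropy loss of the linear model and its gradient w.r.t. the input."""
    p = softmax(W @ x)
    onehot = np.eye(num_classes)[label]
    return -np.log(p[label]), W.T @ (p - onehot)

def random_transform(x):
    """One of several simple input transformations, chosen at random
    (illustrative analogues of the paper's data-augmentation transforms)."""
    choice = rng.integers(3)
    if choice == 0:                        # additive noise
        return x + 0.01 * rng.normal(size=x.shape)
    if choice == 1:                        # small circular shift ("translation")
        return np.roll(x, rng.integers(1, 4))
    return x * rng.uniform(0.9, 1.1)       # mild rescaling

def diversified_attack(x, label, eps=0.1, step=0.02, iters=10, n_transforms=8):
    """Iteratively add a perturbation built from the weighted-average gradient
    over a batch of randomly transformed copies of the current input."""
    x_adv = x.copy()
    weights = np.full(n_transforms, 1.0 / n_transforms)   # uniform weights assumed
    for _ in range(iters):
        grads = [loss_and_grad(random_transform(x_adv), label)[1]
                 for _ in range(n_transforms)]
        avg_grad = np.tensordot(weights, np.stack(grads), axes=1)
        x_adv = x_adv + step * np.sign(avg_grad)           # gradient-ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)          # stay in the eps-ball
    return x_adv

x = rng.normal(size=dim)
x_adv = diversified_attack(x, label=2)
```

Averaging gradients over many transformed copies smooths out gradient directions that are specific to the surrogate model, which is the mechanism by which the method is said to reduce overfitting and improve black-box transferability.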
