Abstract

The transferability of adversarial examples under the black-box attack setting has attracted extensive attention from the community. Among recently proposed methods, input transformation is one of the most effective approaches for improving transferability. However, existing methods either improve transferability only slightly or are not robust against defense models. We delve into the generation process of adversarial examples and find that existing input transformation methods tend to craft adversarial examples by transforming the entire image, which we term image-level transformations. This naturally motivates us to perform pixel-level transformations, i.e., transforming only a subset of the image's pixels. Experimental results show that pixel-level transformations can considerably enhance the transferability of adversarial examples while remaining robust against defense models. We believe that pixel-level transformations are more fine-grained than image-level transformations and can therefore achieve better performance. Based on this finding, we propose the pixel-level scale variation (PSV) method to further improve the transferability of adversarial examples. The proposed PSV randomly samples a set of scaled mask matrices and transforms the selected pixels of the input image with these matrices to increase pixel-level diversity. Empirical evaluations on the standard ImageNet dataset demonstrate the effectiveness and superior performance of the proposed PSV on both normally trained models (with the highest average attack success rate of 79.2%) and defense models (with the highest average attack success rate of 61.4%). Combining PSV with other input transformation methods can further improve transferability (with the highest average attack success rate of 88.2%).
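The abstract describes PSV only at a high level. The following is a minimal sketch of the stated idea, i.e., sampling random pixel masks and scaling only the masked pixels. The function name, the binary per-pixel mask construction, the 1/2^i scale schedule, and all parameter defaults are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def pixel_level_scale_variation(image, num_copies=5, scale_base=2.0,
                                mask_ratio=0.5, rng=None):
    """Illustrative sketch of pixel-level scale variation (PSV).

    For each of `num_copies` transformed copies, a random binary mask
    selects a subset of pixels; those pixels are scaled by
    1 / scale_base**i while all other pixels are left unchanged.
    The mask construction and scale schedule here are assumptions.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    transformed = []
    for i in range(num_copies):
        # Random pixel-level mask: 1 where the pixel will be scaled.
        mask = (rng.random((h, w, 1)) < mask_ratio).astype(image.dtype)
        scale = 1.0 / (scale_base ** i)
        # Scale only the masked pixels; keep the rest of the image intact.
        transformed.append(image * (1 - mask) + image * mask * scale)
    return transformed
```

In a transfer attack, gradients would typically be averaged over these transformed copies when updating the adversarial perturbation, analogous to other input-transformation methods.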
