Abstract
Although deep neural networks (DNNs) have advanced performance in many application scenarios, they are vulnerable to adversarial examples crafted by adding imperceptible perturbations. Most existing adversarial attacks rely on the structure and parameter information of the attacked network. As a result, the generated adversarial examples transfer poorly to black-box defense models, which makes them difficult to use in real-world applications. In this paper, we propose an approach based on saliency distribution and data augmentation to generate transferable adversarial examples against defense models. By optimizing perturbations over non-saliency regions, the generated adversarial examples are less sensitive to the attacked source models and transfer better. Further, by applying data augmentation while generating adversarial examples, the overfitting problem on source models is alleviated in targeted attacks. Extensive experiments show that the proposed approach generates adversarial examples with higher transferability. The source code is available at https://github.com/dongysxd/SDM-FGSM-Attack.
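To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of an iterative FGSM-style attack that (a) restricts perturbation updates to non-salient pixels and (b) applies a random resize-and-pad augmentation at each step. This is an illustrative reconstruction, not the authors' SDM-FGSM implementation: the gradient-magnitude saliency estimate, the quantile threshold, the specific augmentation, and all function names and parameters here are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def saliency_masked_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
                           saliency_quantile=0.5):
    """Illustrative sketch: iterative FGSM restricted to non-salient pixels,
    with a random resize-and-pad augmentation per step (assumed details).

    Saliency is approximated by the input-gradient magnitude on the clean
    image; pixels above `saliency_quantile` are frozen (mask = 0), so the
    perturbation is optimized over non-saliency regions only.
    """
    x = x.clone().detach()

    # Estimate per-pixel saliency on the clean input (assumed estimator).
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    sal = grad.abs().amax(dim=1, keepdim=True)
    thresh = sal.flatten(1).quantile(saliency_quantile, dim=1).view(-1, 1, 1, 1)
    mask = (sal <= thresh).float()  # 1 on non-salient pixels, 0 elsewhere

    adv = x.clone()
    for _ in range(steps):
        adv.requires_grad_(True)

        # Random resize-and-pad augmentation to reduce overfitting
        # to the source model (one possible augmentation choice).
        size = x.shape[-1]
        s = torch.randint(int(0.9 * size), size + 1, (1,)).item()
        aug = F.interpolate(adv, size=s, mode="bilinear", align_corners=False)
        pad = size - s
        left = torch.randint(0, pad + 1, (1,)).item()
        top = torch.randint(0, pad + 1, (1,)).item()
        aug = F.pad(aug, (left, pad - left, top, pad - top))

        loss = F.cross_entropy(model(aug), y)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign() * mask        # update non-salient region only
            adv = x + (adv - x).clamp(-eps, eps)          # project into the eps-ball
            adv = adv.clamp(0, 1)
    return adv.detach()
```

Under these assumptions, the mask keeps perturbations off the pixels the source model attends to most, and the per-step augmentation varies the input seen by the model, both of which the abstract credits for improved transferability.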