Abstract

Adversarial examples have emerged as an increasingly severe threat to deep neural networks. Recent works have revealed that these malicious samples can transfer across different neural networks and effectively attack other models. State-of-the-art methodologies leverage the Fast Gradient Sign Method (FGSM) to generate obstructing textures that cause neural networks to make incorrect inferences. However, their over-reliance on task-specific loss functions makes the resulting adversarial examples less transferable across networks. Moreover, recent denoising-based adaptive defences perform well against the aforementioned attacks. Therefore, to achieve better transferability and attack effectiveness, we propose a novel attack, referred to as the Fabricate-Vanish (FV) attack, which erases benign representations and generates obstruction textures simultaneously. The proposed FV attack treats adversarial example transferability as a latent contribution of each layer of the deep neural network and maximizes attack performance by balancing transferability against the task-specific loss. Our experimental results on ImageNet show that the proposed FV attack achieves the best attack performance and better transferability, degrading classifier accuracy by 3.8% more on average than state-of-the-art attacks.
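
For context on the FGSM-based baselines the abstract refers to, below is a minimal PyTorch sketch of a single-step FGSM perturbation driven by a cross-entropy (task-specific) loss. The model, the epsilon value, and the assumption of inputs in [0, 1] are illustrative choices; the FV attack itself is not reproduced here, since the abstract does not specify its full formulation.

```python
# Illustrative single-step FGSM sketch (baseline attack, not the proposed FV attack).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example with one signed-gradient step on the
    task-specific (cross-entropy) loss; assumes inputs lie in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```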
