Abstract
Deep neural networks can be easily fooled by adversarial examples, which are specially crafted inputs containing subtle, intentional perturbations. A plethora of papers have proposed using filters to defend against adversarial example attacks. However, we demonstrate that automatic filter-based defenses may not be reliable. In this article, we present URL2AED, which Uses a Reinforcement Learning scheme TO escape Automatic filter-based adversarial Example Defenses. Specifically, URL2AED uses a purpose-built policy-gradient reinforcement learning (RL) algorithm to generate adversarial examples (AEs) that escape automatic filter-based AE defenses. In particular, we design separate reward functions in policy-gradient RL for targeted attacks and non-targeted attacks. Furthermore, we customize the training algorithm to reduce the possible action space in policy-gradient RL, which accelerates URL2AED training while still ensuring that URL2AED generates successful AEs. To demonstrate the performance of the proposed URL2AED, we conduct extensive experiments on three public datasets, evaluating different degrees of perturbation, different filter parameters, cross-model transferability, and time consumption. The experimental results show that URL2AED achieves high attack success rates against automatic filter-based defenses and good cross-model transferability.
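The abstract does not include code, but the core idea can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a median filter stands in for the "automatic filter-based defense", `classifier` is a hypothetical black-box callable returning class probabilities for a grayscale image in [0, 1], the reward functions are simplified stand-ins for the paper's targeted/non-targeted designs, and the single-pixel action space is only a toy version of the action-space reduction the abstract mentions.

```python
# Sketch (not the paper's code): policy-gradient AE generation that rewards
# perturbations surviving a filter-based defense.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import median_filter


class PixelPolicy(nn.Module):
    """Categorical policy over a reduced action space:
    which pixel to perturb, and in which direction (+eps or -eps)."""

    def __init__(self, n_pixels):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_pixels * 2))

    def sample(self):
        dist = torch.distributions.Categorical(logits=self.logits)
        action = dist.sample()
        return action, dist.log_prob(action)


def reward_nontargeted(probs, true_label):
    # Non-targeted attack: reward rises as confidence in the true class falls.
    return 1.0 - float(probs[true_label])


def reward_targeted(probs, target_label):
    # Targeted attack: reward is the confidence assigned to the attacker's target.
    return float(probs[target_label])


def attack(image, true_label, classifier, eps=0.05, steps=500, lr=0.1):
    """REINFORCE loop: sample a single-pixel edit, pass the candidate through
    the defense filter, query the model, and reinforce edits that survive."""
    h, w = image.shape
    policy = PixelPolicy(h * w)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    adv, best_r = image.copy(), -1.0
    for _ in range(steps):
        action, log_prob = policy.sample()
        idx, sign = divmod(action.item(), 2)
        candidate = adv.copy()
        candidate.flat[idx] = np.clip(
            candidate.flat[idx] + (eps if sign else -eps), 0.0, 1.0)
        # Score the *filtered* image, so the perturbation must survive the
        # defense rather than merely fool the bare classifier.
        filtered = median_filter(candidate, size=3)
        probs = classifier(filtered)
        # Swap in reward_targeted(probs, target_label) for a targeted attack.
        r = reward_nontargeted(probs, true_label)
        loss = -log_prob * r  # REINFORCE: gradient ascent on expected reward
        opt.zero_grad()
        loss.backward()
        opt.step()
        if r > best_r:  # greedily keep edits that increase the reward
            best_r, adv = r, candidate
    return adv
```

The key design choice the sketch reflects is that the reward is computed on the *filtered* image: the policy is therefore pushed toward perturbations that remain effective after the defense is applied, which is what distinguishes this attack from one that only targets the bare model.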