Abstract

Deep learning has shown impressive performance in numerous applications. However, recent studies have found that deep learning models are vulnerable to adversarial attacks, in which an attacker adds imperceptible perturbations to benign samples to induce misclassifications. Adversarial attacks in the digital domain focus on constructing imperceptible perturbations, but these are often less effective in the physical world because the perturbations may be destroyed when captured by a camera. Most physical adversarial attacks instead require adding visible adversarial features (e.g., a sticker or a laser) to the target object, which may be noticed by human eyes. In this work, we propose to employ image transformations to generate more natural adversarial samples in the physical world. Concretely, we propose two attack algorithms to satisfy different attack goals: Efficient-AATR employs a greedy strategy to generate adversarial samples with fewer queries; Effective-AATR employs an adaptive particle swarm optimization algorithm to search for the most effective adversarial samples within a given query budget. Extensive experiments demonstrate the superiority of our attacks over state-of-the-art adversarial attacks under mainstream defenses.
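
To illustrate the second attack goal, the following is a minimal sketch of a particle swarm search over image-transformation parameters under a query budget, in the spirit of Effective-AATR as described in the abstract. The actual attack's transformation space, fitness function, and adaptive PSO variant are not given here; `model_predict`, `apply_transform`, and the chosen parameters (rotation angle, scale) are hypothetical stand-ins.

```python
# Hypothetical sketch: PSO over transformation parameters against a black-box
# classifier, stopping once the query budget is exhausted. Not the authors'
# implementation; all names and parameter ranges below are assumptions.
import numpy as np

def model_predict(image):
    # Placeholder black-box classifier: returns class probabilities.
    rng = np.random.default_rng(int(image.sum() * 1000) % 2**32)
    p = rng.random(10)
    return p / p.sum()

def apply_transform(image, angle, scale):
    # Placeholder "natural" transformation; a real attack would actually
    # rotate/rescale the image rather than shift pixel intensities.
    return np.clip(image * scale + angle / 360.0, 0.0, 1.0)

def fitness(image, true_label, params):
    # Untargeted attack: lower probability on the true class is better.
    angle, scale = params
    return -model_predict(apply_transform(image, angle, scale))[true_label]

def pso_attack(image, true_label, n_particles=20, n_iters=30, query_budget=600):
    lo, hi = np.array([-30.0, 0.8]), np.array([30.0, 1.2])  # assumed bounds
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_val = pos[0].copy(), -np.inf
    queries = 0
    for _ in range(n_iters):
        for i in range(n_particles):
            if queries >= query_budget:
                return gbest, gbest_val
            val = fitness(image, true_label, pos[i])
            queries += 1
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i].copy(), val
            if val > gbest_val:
                gbest, gbest_val = pos[i].copy(), val
        # Standard PSO velocity update with fixed inertia/acceleration weights.
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
    return gbest, gbest_val

best_params, best_score = pso_attack(np.full((32, 32), 0.5), true_label=3)
print("best transform params:", best_params, "score:", best_score)
```

Efficient-AATR, by contrast, would replace the swarm search with a greedy loop that keeps only the single best transformation found so far, trading attack strength for fewer queries.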
