Abstract

Adversarial examples have raised serious concerns about the security of deep learning models. Substitute training makes it possible to conduct black-box substitute attacks in real-world scenarios where the attacker has no access to the structure, parameters, or training set of the target model. However, existing substitute training methods require a large number of queries to the target model and suffer from low attack success rates. To alleviate these problems, we propose a novel black-box adversarial attack method, named substitute meta-learning (SML), which combines meta-learning with the training of the substitute model. Unlike existing substitute training methods that rely on data augmentation tactics or refined loss functions, we aim to boost the learning efficiency of the substitute model to improve both training efficiency and attack performance. Specifically, we introduce meta-learning to enable the substitute model to learn the knowledge of the target model using only a few queries. Extensive experiments are conducted on the MNIST and CIFAR-10 datasets. The experimental results show that the proposed SML can improve the attack success rate from 46.1% to 61.3% while requiring fewer queries.
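The abstract does not detail the SML algorithm, but the core idea it describes — meta-training a substitute model so it can absorb the target's behavior from a small number of queries — can be illustrated with a minimal sketch. The code below is a hypothetical, simplified illustration (not the authors' method): it uses a Reptile-style meta-update on a linear substitute model, where each "task" is a small batch of black-box queries to a stand-in target. All names (`target_model`, `inner_adapt`, `meta_train`) and hyperparameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_model(x):
    # Stand-in for the black-box target: the attacker only sees its labels,
    # never its parameters. Here it is a fixed linear decision rule.
    w_true = np.array([1.0, -2.0, 0.5])
    return (x @ w_true > 0).astype(float)

def inner_adapt(w, x, y, lr=0.1, steps=5):
    # Inner loop: a few logistic-regression gradient steps that fit the
    # substitute model to the labels obtained from a small query batch.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        grad = x.T @ (p - y) / len(y)
        w = w - lr * grad
    return w

def meta_train(meta_iters=100, task_queries=16, meta_lr=0.5):
    # Outer loop: Reptile-style meta-update. Each iteration queries the
    # target on only `task_queries` points (one "task"), adapts a copy of
    # the substitute, then moves the meta-parameters toward the adapted copy.
    w = np.zeros(3)  # substitute model parameters
    for _ in range(meta_iters):
        x = rng.normal(size=(task_queries, 3))  # small query batch
        y = target_model(x)                     # few black-box queries
        w_adapted = inner_adapt(w, x, y)
        w = w + meta_lr * (w_adapted - w)       # Reptile meta-update
    return w

w = meta_train()
x_test = rng.normal(size=(1000, 3))
# Agreement between substitute and target on held-out points: a substitute
# that mimics the target well is then useful for transfer-based attacks.
acc = np.mean((x_test @ w > 0).astype(float) == target_model(x_test))
```

In an actual SML-style attack, the substitute would be a neural network and the adversarial examples would be crafted against it and transferred to the target; this sketch only shows the query-efficient meta-training loop that the abstract highlights.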
