Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by imposing mild perturbations on clean inputs. An intriguing property of adversarial examples is that they often remain effective across different DNNs, so transfer-based attacks against DNNs have become an increasing concern. In this scenario, attackers craft adversarial instances on local models without any feedback from the target model. Unfortunately, most existing transfer-based attack methods employ only a single local model to generate adversarial examples, which yields poor transferability because the examples overfit that local model. Although several ensemble attacks have been proposed, they improve the transferability of adversarial examples only slightly, while incurring high memory costs during adversarial example generation. To this end, we propose a novel attack strategy called stochastic serial attack (SSA). SSA attacks local models serially, which reduces memory consumption compared to parallel ensemble attacks. Moreover, since the local models are stochastically selected from a large model set, SSA ensures that the adversarial examples do not overfit specific weaknesses of the local source models. Extensive experiments on the ImageNet dataset and the NeurIPS 2017 adversarial competition dataset show that SSA improves the transferability of adversarial examples while reducing memory consumption during generation.
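The abstract does not give the full algorithm, but the core idea can be illustrated with a minimal PyTorch sketch: an iterative sign-gradient (I-FGSM-style) attack that, at each step, stochastically samples local models from a pool and attacks them one after another. The function name `stochastic_serial_attack` and the parameters `model_pool`, `models_per_step`, `eps`, and `alpha` are illustrative assumptions, not the authors' exact method or notation.

```python
import random
import torch
import torch.nn.functional as F

def stochastic_serial_attack(x, y, model_pool, eps=8/255, alpha=2/255,
                             steps=10, models_per_step=1):
    """Sketch of an SSA-style transfer attack (assumed I-FGSM update).

    At each iteration, a small subset of local models is stochastically
    sampled from a larger pool and attacked serially, so gradients are
    backpropagated through only one model at a time.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Stochastic selection: sample local models for this iteration.
        for model in random.sample(model_pool, models_per_step):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            # Only this one model's computation graph is held in memory,
            # unlike parallel ensemble attacks that backpropagate through
            # all local models at once.
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Serial sign-gradient update against the sampled model.
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back into the eps-ball around the clean input.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

In this sketch, the memory saving comes from the serial loop: the backward pass touches a single sampled model per update, while the stochastic sampling over a large pool keeps the perturbation from overfitting any one local model's weaknesses.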
