Abstract

In recent years, deep neural networks (DNNs) have attracted widespread attention, demonstrating outstanding capabilities in computer vision and natural language processing. However, it has gradually become clear that DNNs are easily misled by adversarial examples, which are formed by superimposing tiny perturbations on the original samples. Although these perturbations are imperceptible to the naked eye, they can significantly alter the model's output, so in security-critical settings adversarial examples pose serious risks to the deployment of DNN systems. When testing and evaluating DNN systems, researchers commonly study the robustness of deep neural networks using transfer-based attacks: black-box attacks that use adversarial examples carefully crafted on a white-box source model. Adversarial examples with strong transferability are more effective against black-box models, so improving transferability has attracted considerable research attention. Because existing methods that improve transferability through adversarial transformation are complicated to train and expensive to attack with, this paper implements the adversarial transformation with a cycle-consistent generative adversarial network, reducing both the training cost and the attack cost. Extensive experiments on the CIFAR family of datasets verify the advantage of this generative approach in improving transferability.
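As a minimal sketch of the transfer-based attack setting described above (not the paper's specific method), the snippet below crafts adversarial examples on a white-box source model with FGSM and measures how often they also fool a separate black-box target model. The model and data names are illustrative assumptions; any pair of CIFAR-trained PyTorch classifiers could be substituted.

```python
# Sketch of a transfer-based (black-box) attack: perturbations are crafted on a
# white-box source model and evaluated against an unseen black-box target model.
# `source_model`, `target_model`, `x_batch`, `y_batch` are assumed to be provided.
import torch
import torch.nn.functional as F


def fgsm_attack(source_model, images, labels, epsilon=8 / 255):
    """Craft adversarial examples on the white-box source model via FGSM."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(source_model(images), labels)
    loss.backward()
    # Superimpose a tiny, sign-based perturbation on the original samples.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()


@torch.no_grad()
def transfer_success_rate(target_model, adv_images, labels):
    """Fraction of adversarial examples that also fool the black-box target."""
    preds = target_model(adv_images).argmax(dim=1)
    return (preds != labels).float().mean().item()


# Hypothetical usage:
# adv = fgsm_attack(source_model, x_batch, y_batch)
# print(transfer_success_rate(target_model, adv, y_batch))
```

The higher the transfer success rate on the target model, the stronger the transferability of the crafted examples; methods such as the adversarial transformation studied in this paper aim to raise this rate without querying the target.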
