Abstract

Neural network technology has achieved remarkable results in computer vision, speech recognition, natural language processing, and other fields. However, the limited interpretability of neural network models introduces potential security risks when they are deployed in real-world settings. In recent years, many studies have shown that adversarial example techniques, which apply extremely small perturbations to an input sample, can mislead most mainstream neural network models, such as fully connected and convolutional neural networks, into making wrong predictions. This phenomenon reveals that existing neural network technology lacks security and robustness. The study of adversarial examples is therefore of great significance for improving the safety and robustness of neural network models and for deepening researchers' understanding of how these models learn. The transferability (migration) of adversarial examples is an important research direction within adversarial attacks: by exploring how adversarial examples transfer across models, researchers aim to summarize the regularities of adversarial attacks and thereby build more robust models in the deep learning area. In this paper, the transferability of adversarial examples in image classification is studied to provide analytical data for summarizing the characteristics of adversarial examples.
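As a hedged illustration of how an extremely small input perturbation can change a classifier's prediction, the sketch below implements the well-known fast gradient sign method (FGSM) in PyTorch. This particular attack, the toy model, and all parameter values are illustrative assumptions, not the method studied in the paper.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: step each input pixel by at most
    epsilon in the direction of the sign of the loss gradient w.r.t. x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp back to the valid pixel range so the perturbed image stays legal.
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch with a hypothetical stand-in classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # hypothetical input image
y = torch.tensor([3])          # hypothetical true label
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

In a transferability study, an adversarial example like `x_adv`, crafted against one model, would then be fed to other models to test whether the misclassification carries over.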
