Abstract

The use of neural networks has produced outstanding results in a variety of domains, including computer vision and text mining. Numerous studies in recent years have shown that applying small adversarial perturbations to input samples can mislead most mainstream neural network models, such as Fully Connected Neural Networks (FCNN) and Convolutional Neural Networks (CNN), into making wrong judgments. Adversarial attacks can help researchers discover potential defects of neural network models in terms of robustness and security, so that the models' learning process can be better understood and their interpretability improved. However, when an adversarial attack is applied to a non-deep-learning model, the results differ markedly from those observed on deep learning models. This paper first briefly outlines existing adversarial example techniques; it then selects the CIFAR10 dataset as the test data and LeNet, ResNet18, and VGG16 as the test models according to the underlying technical principles; next, it uses the Fast Gradient Sign Method (FGSM) to conduct attack experiments against the CNNs and against traditional machine learning algorithms such as K-Nearest Neighbors (KNN) and Support Vector Machine (SVM); finally, it analyzes the experimental results and finds that adversarial example attacks are largely specific to deep learning models, although it cannot be ruled out that adversarial examples have some attack effect on traditional machine learning models.
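For readers unfamiliar with FGSM, the sketch below illustrates the attack the abstract refers to: each input is perturbed by a single step of size epsilon in the direction of the sign of the loss gradient. This is only a minimal, hedged illustration, not the paper's experimental code: the torchvision ResNet-18 here is untrained and stands in for the trained LeNet/ResNet18/VGG16 models, and epsilon=0.03 is an arbitrary example value, not a setting reported in the paper.

```python
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T


def fgsm_attack(model, images, labels, epsilon):
    """Minimal FGSM sketch: x_adv = clamp(x + epsilon * sign(grad_x L(f(x), y)))."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    model.zero_grad()
    loss.backward()
    # Single signed-gradient step, clamped back to the valid pixel range [0, 1].
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Illustrative setup (assumption): an untrained torchvision ResNet-18 with 10
    # output classes stands in for the paper's trained CIFAR-10 models.
    model = torchvision.models.resnet18(num_classes=10).eval()

    testset = torchvision.datasets.CIFAR10(
        root="./data", train=False, download=True, transform=T.ToTensor()
    )
    loader = torch.utils.data.DataLoader(testset, batch_size=8, shuffle=False)

    images, labels = next(iter(loader))
    adv = fgsm_attack(model, images, labels, epsilon=0.03)  # epsilon chosen for illustration

    clean_pred = model(images).argmax(dim=1)
    adv_pred = model(adv).argmax(dim=1)
    print("clean predictions:      ", clean_pred.tolist())
    print("adversarial predictions:", adv_pred.tolist())
```

With a properly trained model, comparing the two prediction lines (or accuracies over the full test set) at increasing epsilon values is the standard way to measure how strongly FGSM degrades each classifier, which is the kind of comparison the paper performs across CNNs, KNN, and SVM.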
