Abstract

Deep learning techniques are now widely deployed, yet deep neural networks (DNNs) remain vulnerable to adversarial attacks, which has become a hidden risk affecting system security. An adversarial sample is a perturbed input crafted to fool a deep learning model, and the inherent lack of robustness of DNNs to such samples raises security concerns, especially for tasks that demand high reliability. This paper proposes a robustness-enhancing method based on principal component analysis (PCA) and applies it to deep networks, improving their ability to resist adversarial attacks. Specifically, the method first uses PCA to reduce the dimensionality of clean samples, then applies two non-targeted attacks, DeepFool and FGSM, to craft adversarial samples both before and after the dimensionality reduction. Finally, by evaluating the resulting change in classifier robustness, we draw the corresponding analytical conclusions. Experimental results on MNIST show that the proposed method makes deep networks more robust against white-box attacks.
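
The evaluation pipeline described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: it assumes a scikit-learn PCA, a placeholder PyTorch linear classifier, and random data standing in for MNIST; the component count (50) and perturbation budget (eps=0.1) are illustrative assumptions, and only the FGSM attack is shown (DeepFool is omitted for brevity).

```python
# Sketch of the abstract's pipeline: PCA-reduce clean samples, craft
# FGSM adversarial samples pre- and post-reduction, compare robustness.
# All concrete values here are assumptions, not taken from the paper.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one gradient-sign step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy data standing in for flattened 28x28 MNIST images (784-dim).
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 784)).astype(np.float32)
y = torch.randint(0, 10, (256,))

# Step 1: project clean samples onto the top principal components and
# reconstruct them, discarding low-variance directions.
pca = PCA(n_components=50).fit(X)
X_pca = pca.inverse_transform(pca.transform(X)).astype(np.float32)

# Step 2: craft adversarial samples before and after the PCA step.
model = nn.Sequential(nn.Linear(784, 10))  # placeholder classifier
x_raw, x_red = torch.from_numpy(X), torch.from_numpy(X_pca)
adv_raw = fgsm(model, x_raw, y, eps=0.1)
adv_red = fgsm(model, x_red, y, eps=0.1)

# Step 3: compare accuracy on the two adversarial sets to measure how
# the PCA preprocessing changes the classifier's robustness.
def acc(m, x, y):
    return (m(x).argmax(1) == y).float().mean().item()

print("robustness pre-PCA :", acc(model, adv_raw, y))
print("robustness post-PCA:", acc(model, adv_red, y))
```

In a faithful reproduction, the placeholder classifier would be a trained DNN and the toy arrays would be the actual MNIST images; the structure of the comparison, however, follows the three steps stated in the abstract.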
