Abstract

Machine learning (ML) algorithms outperform traditional algorithms in various applications. However, flaws in ML classifiers make them vulnerable to adversarial examples, which are generated by adding small but purposeful distortions to natural examples. This paper investigates the transferability of adversarial examples generated on a sparse and structured dataset, and the ability of adversarial training to resist such examples. The results demonstrate that adversarial examples crafted against a DNN can also fool a set of ML classifiers, including decision trees, random forests, SVMs, CNNs, and RNNs. Moreover, adversarial training improves the robustness of the DNN against these attacks.
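
The abstract does not name the specific attack used to craft the distortions; as one illustrative sketch, the snippet below applies a fast gradient sign method (FGSM)-style perturbation to natural examples using a PyTorch classifier. The model architecture, epsilon value, and random input data are hypothetical placeholders, not the paper's actual setup.

```python
# A minimal FGSM-style sketch: add a small, purposeful distortion that tends to
# increase the classifier's loss. Model, epsilon, and data are illustrative only.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Return x plus a small adversarial distortion computed from the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Illustrative usage with a toy DNN and random "natural examples" (hypothetical).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x_natural = torch.rand(8, 20)
y_true = torch.randint(0, 2, (8,))
x_adversarial = fgsm_perturb(model, x_natural, y_true)
```

In an adversarial-training setup of the kind the abstract describes, such perturbed examples would be mixed back into the training batches so the classifier learns to resist them.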
