Abstract

Machine learning has achieved strong performance in many practical application domains, such as computer vision, natural language processing, and autonomous driving. As it is deployed ever more widely, its security has drawn increasing attention. Previous research shows that machine learning models are highly vulnerable to various kinds of adversarial attacks, so the security of different models under different attacks needs to be evaluated. In this paper, we propose a method for comparing the security of different machine learning models. We first classify adversarial attacks into three classes according to their targets: attacks on test data, attacks on training data, and attacks on model parameters, and define subclasses under different adversarial assumptions. We then take a support vector machine (SVM), a neural network with one hidden layer (NN), and a convolutional neural network (CNN) as examples, launching different kinds of attacks on them to evaluate and compare their security. Our experiments additionally illustrate the effect of concealing actions taken by the adversary.
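As a minimal sketch of the first attack class (attacks on test data, i.e. evasion attacks), the snippet below perturbs a test point of a linear classifier in the direction of the sign of the loss gradient, in the spirit of the fast gradient sign method. The weights, input, and epsilon are illustrative toy values, not taken from the paper's experiments.

```python
import numpy as np

def fgsm_linear(x, w, b, y, eps):
    """One-step sign-gradient evasion attack on a linear score w.x + b.

    y is the true label in {-1, +1}; for a margin (hinge-style) loss the
    gradient of the loss w.r.t. x is proportional to -y * w, so the step
    moves x against its correct class within an L-infinity ball of radius eps.
    """
    grad = -y * w                      # d(loss)/dx for a margin loss
    return x + eps * np.sign(grad)     # bounded L-infinity perturbation

# Toy example: a point correctly classified as +1 ...
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.4])
y = 1
print(int(np.sign(w @ x + b)))         # original prediction: 1

# ... is misclassified after a small bounded perturbation.
x_adv = fgsm_linear(x, w, b, y, eps=0.7)
print(int(np.sign(w @ x_adv + b)))     # adversarial prediction: -1
```

Attacks on training data (poisoning) and on model parameters follow the same adversarial principle but target a different stage of the pipeline.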
