Abstract

Traditional machine learning techniques can suffer from evasion attacks, in which an attacker manipulates malicious samples at test time so that they are misclassified as legitimate. Evaluating the security of a classifier is therefore crucial when developing a system that is robust against evasion attacks. The current security evaluation for Support Vector Machines (SVMs) is very time-consuming, which severely limits its applicability to big-data settings. In this paper, we propose a fast security evaluation of SVMs against evasion attacks. It measures the security of an SVM by the average distance between a set of malicious samples and the separating hyperplane. Experimental results show a strong correlation between the proposed security evaluation and the current one. On six datasets, the current security measure, min-cost-mod, runs 24,000 to 551,000 times longer than our proposed measure.
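The measure described above can be sketched as follows: for a linear SVM with weight vector w and bias b, the distance of a sample x to the hyperplane is |w·x + b| / ||w||, and the proposed security score is the average of this distance over a set of malicious samples. This is a minimal illustration, assuming scikit-learn's `SVC` and synthetic toy data (both are illustrative choices, not from the paper):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy data: legitimate samples (label 0) and malicious samples (label 1)
X_legit = rng.normal(loc=-1.0, size=(50, 2))
X_mal = rng.normal(loc=1.0, size=(50, 2))
X = np.vstack([X_legit, X_mal])
y = np.array([0] * 50 + [1] * 50)

# Train a linear SVM
clf = SVC(kernel="linear").fit(X, y)

# decision_function gives w.x + b; dividing by ||w|| yields the
# geometric distance of each malicious sample to the hyperplane
w_norm = np.linalg.norm(clf.coef_)
distances = np.abs(clf.decision_function(X_mal)) / w_norm

# Proposed-style security score: average distance of malicious
# samples to the hyperplane (larger = harder to evade, intuitively)
security = distances.mean()
print(f"average distance security score: {security:.4f}")
```

In contrast, the min-cost-mod style of evaluation must solve an optimization problem per sample to find a minimally modified evading variant, which is what makes the distance-based average so much cheaper to compute.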
