Abstract

Research interest in demonstrating the vulnerability of Machine Learning (ML) algorithms to sophisticated Adversarial Machine Learning (AML) perturbation attacks has grown in recent years. Adversarial attacks perturb dataset instances by finding the nearest decision boundary and moving the instance values towards that boundary. A popular challenge in this field is therefore combating such adversarial attacks by increasing model accuracy. Making a model more robust often requires the ML engineer to have preemptive knowledge not only that an adversarial attack will occur, but also which attack will occur. This work is the first to reinforce a Neural Network (NN) model in a network security environment against AML attacks by leveraging an evidential classification approach. Evidential approaches provide an additional measure of uncertainty over the features, enabling ambiguous instances to be classified as uncertain. Crucially, the proposed approach requires neither training on perturbed datasets nor any knowledge that an adversarial attack may take place. Recent advances in making ML models more robust against single-step adversarial attacks have been highly successful, but researchers have found it considerably harder to make models robust against complex, iterative attacks. The proposed approach is evaluated on a modern network security dataset and compared against a conventional Bayesian NN. Rather than training a model to increase accuracy, the proposed approach aims to reduce the misclassification rate of perturbed data. By allowing instances in a dataset to be classified as uncertain, and in comparison with a conventional NN, the proposed approach decreases the misclassification rates on the two perturbed malicious classes from 70.53% to 13.09% and from 99.67% to 1.33%, respectively.
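To illustrate the general idea of rejecting ambiguous instances as uncertain, the following is a minimal sketch of an evidential (subjective-logic) decision rule in the style of evidential deep learning, not the authors' exact model. The evidence values, class count, and rejection threshold below are illustrative assumptions.

```python
# Minimal sketch of evidential classification with an "uncertain" reject option.
# NOTE: this is an illustrative assumption, not the paper's exact method; the
# evidence values and threshold are hypothetical.
import numpy as np

def evidential_decision(evidence, uncertainty_threshold=0.5):
    """Map per-class evidence (non-negative NN outputs) to a decision.

    evidence: array of shape (K,) with non-negative evidence per class.
    Returns (predicted class index, uncertainty), or ("uncertain", uncertainty)
    when the subjective-logic uncertainty mass u = K / S exceeds the threshold.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[0]
    alpha = evidence + 1.0   # Dirichlet parameters
    S = alpha.sum()          # Dirichlet strength
    belief = evidence / S    # belief mass per class
    u = K / S                # uncertainty mass (beliefs and u sum to 1)
    if u > uncertainty_threshold:
        return "uncertain", u
    return int(np.argmax(belief)), u

# Hypothetical examples: a confident prediction vs. an ambiguous (e.g. perturbed)
# instance whose low, even evidence is rejected as uncertain.
print(evidential_decision([40.0, 1.0, 2.0]))  # -> (0, u ~ 0.07)
print(evidential_decision([0.5, 0.4, 0.6]))   # -> ('uncertain', u ~ 0.67)
```

Under such a rule, a perturbed instance pushed towards a decision boundary tends to generate low, evenly spread evidence, so it is flagged as uncertain rather than misclassified as benign.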
