Abstract

Cognitive systems combined with machine learning techniques have delivered significant improvements in many applications. However, recent adversarial attacks, such as data poisoning, evasion attacks, and exploratory attacks, have been shown either to cause machine learning methods to misbehave or to leak sensitive model parameters. In this work, we devise a prototype of a malware cognitive system, called DeepArmour, which performs malware classification that is robust against adversarial attacks. At the heart of our method is a voting system with three different machine learning malware classifiers: random forest, multi-layer perceptron, and structure2vec. In addition, DeepArmour applies several adversarial countermeasures, such as feature reconstruction and adversarial retraining, to strengthen its robustness. We tested DeepArmour on a malware execution trace dataset containing 12,536 malware samples in five categories and achieved 0.989 accuracy with 10-fold cross validation. Further, to demonstrate its ability to combat adversarial attacks, we performed a white-box evasion attack on the dataset and showed how our system is resilient to such attacks. In particular, DeepArmour achieves 0.675 accuracy on the generated adversarial samples, which are unknown to the model. After retraining with only 10% of the adversarial samples, DeepArmour achieves 0.839 accuracy.
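
The sketch below illustrates the two core ideas the abstract describes: a majority-voting ensemble of classifiers, and adversarial retraining by folding a small fraction of adversarial samples back into the training set. It is a minimal illustration, not the paper's implementation: the feature matrix X, labels y, sample counts, and hyperparameters are hypothetical placeholders, and the paper's third voter, structure2vec, is a graph-embedding model with no off-the-shelf scikit-learn equivalent, so only the random forest and multi-layer perceptron members are shown.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features extracted from malware execution traces;
# five labels stand in for the dataset's five malware categories.
X = np.random.rand(200, 64)
y = np.random.randint(0, 5, 200)

# Two of the paper's three voters (structure2vec omitted, see above).
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)),
    ],
    voting="hard",  # majority vote across the member classifiers
)

# 10-fold cross validation, matching the evaluation protocol above.
scores = cross_val_score(ensemble, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.3f}")

# Adversarial retraining countermeasure (sketch): append a small set of
# labeled adversarial samples (hypothetical here) and refit the ensemble.
X_adv = np.random.rand(20, 64)
y_adv = np.random.randint(0, 5, 20)
ensemble.fit(np.vstack([X, X_adv]), np.concatenate([y, y_adv]))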
