Abstract

Machine learning augments today's intrusion detection system (IDS) capabilities to cope with unknown malware. However, an attacker who gains partial knowledge of the IDS's classifier can create a modified version of the malware that evades detection. In this article we present an IDS based on several classifiers that use the system calls executed by the inspected code as features. We then present a camouflage algorithm that modifies malicious code so that decision tree and random forest classifiers label it as benign, while preserving the code's functionality. We also present transformations of the classifier's input that prevent this camouflage, together with a modified camouflage algorithm that overcomes those transformations. Our research shows that providing a decision tree based classifier with a large training set is not enough to counter malware. One must also be aware that the classifier may be fooled by a camouflage algorithm, and try to counter such an attempt with techniques such as input transformations or training set updates.
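To make the setting concrete, the following is a minimal, hypothetical Python sketch of the general idea only, not the authors' implementation: a decision-tree IDS trained on system-call frequency features, and a naive camouflage loop that appends functionality-preserving benign calls to a malicious trace until the classifier's label flips. The SYSCALLS list, the toy traces, and the padding strategy are all illustrative assumptions.

```python
# Hypothetical sketch: system-call-based decision-tree IDS and a naive camouflage loop.
from collections import Counter

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative set of monitored system calls (assumption, not from the paper).
SYSCALLS = ["open", "read", "write", "close", "socket", "connect", "exec"]

def to_features(trace):
    """Relative frequency of each monitored system call in the trace."""
    counts = Counter(trace)
    return np.array([counts[s] / len(trace) for s in SYSCALLS], dtype=float)

# Toy training data: traces labelled 0 = benign, 1 = malicious.
benign_traces = [["open", "read", "close"], ["open", "read", "write", "close"]]
malicious_traces = [["socket", "connect", "exec"], ["open", "socket", "connect", "exec"]]
X = np.vstack([to_features(t) for t in benign_traces + malicious_traces])
y = np.array([0] * len(benign_traces) + [1] * len(malicious_traces))

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def camouflage(trace, classifier, benign_noise=("open", "read", "close"), max_rounds=50):
    """Repeatedly append benign 'no-op' calls (which leave the code's behaviour
    unchanged) until the classifier labels the trace benign, or give up."""
    padded = list(trace)
    for _ in range(max_rounds):
        if classifier.predict([to_features(padded)])[0] == 0:
            return padded  # classifier now reports 'benign'
        padded.extend(benign_noise)  # dilute the malicious-call frequencies
    return padded

evasive = camouflage(["socket", "connect", "exec"], clf)
print(clf.predict([to_features(evasive)]))  # should print [0] on this toy model
```

Input transformations of the kind the abstract mentions would aim to make such padding ineffective, for example by normalising or filtering the trace before feature extraction, which is what the modified camouflage algorithm must then overcome.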
