Abstract
Adversarial binaries are executable files that have been altered, without loss of function, by an AI agent in order to deceive malware detection systems. Progress in this emerging line of research has been constrained by the complex and rigid structure of executable files. Although prior work has demonstrated that such binaries can deceive a variety of malware classification models that rely on disparate feature sets, no consensus has been reached on the best approach, either in terms of optimization algorithms or instrumentation methods. While inconsistencies in the data sets, target classifiers, and functionality-verification methods make head-to-head comparisons difficult, we extract lessons learned and make recommendations for future research.