Abstract
Adversarial binaries are executable files that have been altered by an AI agent, without loss of functionality, in order to deceive malware detection systems. Progress in this emerging line of research has been constrained by the complex and rigid structure of executable files. Prior work has demonstrated that such binaries can deceive a variety of malware classification models that rely on disparate feature sets, yet no consensus has emerged on the best approach, either in terms of optimization algorithms or instrumentation methods. Although inconsistencies in data sets, target classifiers, and functionality-verification methods make head-to-head comparisons difficult, we extract lessons learned and make recommendations for future research.
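To make "altered without loss of functionality" concrete, the sketch below shows one simple transformation commonly discussed in this literature: appending bytes past the end of a PE file. The loader ignores overlay data, so execution is unchanged while byte-level classifiers see a different input. This is an illustrative example only, not the method of any particular paper; the file names and the fixed payload are hypothetical, whereas published attacks choose the appended bytes with an optimization loop.

```python
# Minimal sketch of a functionality-preserving edit: appending bytes to a
# PE file's overlay. The Windows loader only maps the regions described by
# the headers, so appended bytes do not affect execution, but they do change
# the raw byte stream seen by byte-based malware classifiers.
# File names and the payload below are illustrative placeholders.

from pathlib import Path


def append_overlay(src: Path, dst: Path, payload: bytes) -> None:
    """Copy `src` to `dst` with `payload` appended after the original content."""
    data = src.read_bytes()
    dst.write_bytes(data + payload)


if __name__ == "__main__":
    # In published attacks the payload is selected by an optimizer
    # (e.g., gradient-guided or evolutionary search) rather than fixed bytes.
    append_overlay(Path("sample.exe"), Path("sample_adv.exe"), b"\x00" * 1024)
```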