Abstract

Machine learning algorithms can quickly adapt to and find patterns in large, diverse data sources, making them a potential asset to application developers in enterprise systems, networks, and security domains. These same capabilities make analyzing the security implications of such tools a critical task for machine learning researchers and practitioners alike, and have spawned a new subfield of research into adversarial learning for security-sensitive domains. The work presented in this book advanced the state of the art in this field with five primary contributions: a taxonomy for qualifying the security vulnerabilities of a learner, two novel practical attack/defense scenarios for learning in real-world settings, learning algorithms with theoretical guarantees on training-data privacy preservation, and a generalization of a theoretical paradigm for evading detection by a classifier. However, research in adversarial machine learning has only begun to address the field's complex obstacles, and many challenges remain. These challenges suggest several new directions for research within both machine learning and computer security. In this chapter we review our contributions and list a number of open problems in the area.

Throughout this book, we investigated both the practical and theoretical aspects of applying machine learning in security domains. To understand potential threats, we analyzed the vulnerability of learning systems to adversarial malfeasance. We studied both attacks designed to optimally affect the learning system and attacks constrained by real-world limitations on the adversary's capabilities and information. We further designed defense strategies, which we showed significantly diminish the effect of these attacks. Our research focused on learning tasks in virus, spam, and network anomaly detection, but it is broadly applicable across many systems and security domains and has far-reaching implications for any system that incorporates learning. Below is a summary of the contributions of each component of this book, followed by a discussion of open problems and future directions for research.

Framework for Secure Learning

The first contribution discussed in this book was a framework for assessing risks to a learner within a particular security context (see Table 3.1). The basis for this work is a taxonomy of the characteristics of potential attacks. From this taxonomy (summarized in Table 9.1), we developed security games between an attacker and defender, tailored to the particular type of threat posed by the attacker.
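To make the taxonomy concrete, the sketch below encodes its three commonly cited axes (the attacker's influence, the kind of security violation, and the attack's specificity) as plain Python enumerations and classifies one familiar example, a spam-dictionary poisoning attack. This is a minimal illustration only: the axis names follow the standard influence/violation/specificity distinctions used in this line of work, but the exact labels, descriptions, and the example classification are our own assumptions, not a reproduction of Table 9.1.

```python
# Illustrative sketch of an attack taxonomy like the one described above.
# Category names and descriptions are assumed for illustration.
from dataclasses import dataclass
from enum import Enum


class Influence(Enum):
    CAUSATIVE = "attacker can alter the training data"
    EXPLORATORY = "attacker only probes an already-trained model"


class SecurityViolation(Enum):
    INTEGRITY = "false negatives: malicious instances slip past the detector"
    AVAILABILITY = "false positives: benign activity is blocked or degraded"
    PRIVACY = "information about the training data or model is leaked"


class Specificity(Enum):
    TARGETED = "focused on particular instances or users"
    INDISCRIMINATE = "degrades the system broadly"


@dataclass
class ThreatProfile:
    """One cell of the taxonomy: a qualitative description of an attack class."""
    influence: Influence
    violation: SecurityViolation
    specificity: Specificity


# Example: a spam-dictionary poisoning attack would plausibly be classified as
# causative (it corrupts training email), an availability violation (it causes
# legitimate mail to be filtered), and indiscriminate (it affects all users).
dictionary_attack = ThreatProfile(
    influence=Influence.CAUSATIVE,
    violation=SecurityViolation.AVAILABILITY,
    specificity=Specificity.INDISCRIMINATE,
)
print(dictionary_attack)
```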
