Abstract

Machine learning (ML) has been widely adopted for automated decision making in a variety of fields, including recognition and classification, recommendation systems, and natural language processing. However, given the high cost of training data and computing resources, recent years have witnessed a rapid increase in partially or fully outsourced ML training, which creates opportunities for adversaries to exploit. A primary threat during the training phase is the poisoning attack, in which adversaries seek to subvert the behavior of machine learning systems by corrupting the training data or through other means of interference. Although a growing number of relevant studies have been published, research on poisoning attacks remains scattered, with each paper focusing on a particular task in a specific domain. In this survey, we summarize and categorize existing attack methods and the corresponding defenses, and present compelling application scenarios, thus providing a unified framework for analyzing poisoning attacks. In addition, we discuss the main limitations of current work, along with corresponding future directions, to facilitate further research. Our ultimate goal is to provide a comprehensive and self-contained survey of this growing field and to lay the foundation for a more standardized approach to reproducible studies.
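To make the notion of training-data poisoning concrete, the sketch below shows the simplest variant, label flipping: an adversary who controls part of the training set flips a fraction of labels and degrades the resulting model. This is an illustrative toy example, not a method from the surveyed literature; the flip_labels helper and flip_fraction parameter are names chosen here for illustration.

```python
# Minimal label-flipping poisoning sketch (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def flip_labels(y, flip_fraction, rng):
    """Flip the binary labels of a random fraction of training points."""
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # labels are in {0, 1}
    return y_poisoned

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clean data, then on data with 30% of labels flipped.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_poisoned = flip_labels(y_train, flip_fraction=0.3, rng=rng)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

More sophisticated poisoning attacks discussed in the survey optimize the injected points rather than flipping labels at random, but the basic setting (adversarial control over a portion of the training set) is the same.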
