Abstract
It is becoming well known that several types of adversaries, characterized by their threat models, leverage vulnerabilities to compromise machine learning systems. It is therefore important to make machine learning algorithms and systems robust against these adversaries. However, only a few strong countermeasures exist that can be used across all attack scenarios to design a robust artificial intelligence system. This paper is a structured and comprehensive overview of the research on attacks against machine learning systems, and it aims to draw the attention of developers and software houses to the security issues concerning machine learning.
Highlights
It is becoming well known that several types of adversaries, characterized by their threat models, leverage vulnerabilities to compromise a machine learning system
This survey is mainly based on Polyakov's work [1] and tries to provide a structured and comprehensive overview of the research on attacks against Machine Learning (ML) products
Polyakov [1] organizes attacks on ML models according to the attacker's goal (Espionage, Sabotage, Fraud) and the stage of the machine learning pipeline at which they occur; the latter distinction separates attacks on the algorithm from attacks on the model
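To make the notion of an attack on a model concrete, the following sketch shows one well-known evasion technique, the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that increases the model's loss. The toy logistic-regression weights and the input below are illustrative assumptions, not values from the survey.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x shifted by an eps-sized step along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p is the
    predicted probability of the positive class.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical toy classifier and a correctly classified input (label y = 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.2])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_clean = sigmoid(np.dot(w, x) + b)      # above 0.5: classified as 1
p_adv = sigmoid(np.dot(w, x_adv) + b)    # pushed below 0.5: misclassified
```

With these made-up numbers the clean input is classified correctly (probability ≈ 0.73) while the perturbed input falls on the wrong side of the decision boundary, which is the essence of the Sabotage-style evasion attacks the survey discusses.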