Abstract

Data Poisoning Attacks (DPAs) are a sophisticated class of techniques that corrupt the training data of machine learning models in order to manipulate their behaviour. Such attacks are not only technically intricate but also frequently dependent on the characteristics of the victim (target) model. Because of the vast number of DPAs and their variants, defenders protecting a victim model typically resort to trial-and-error searches for an effective defence, an approach that is exhausting and very time-consuming. This paper comprehensively summarises the latest research on DPAs and their defences, proposes a DPA characterising model that helps investigate how adversarial attacks depend on the victim model, and builds a DPA roadmap that guides defenders towards suitable defences. Used as an applied framework, the roadmap groups DPAs into families that share the same features and mathematical computations, equipping defenders with a powerful tool to identify effective defences quickly rather than relying on exhausting trial-and-error methodology. The roadmap, validated through use cases, has been made available as an open-access platform so that other researchers can add new DPAs and update the map continuously.
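To make the notion of training-data poisoning concrete, the sketch below shows a minimal label-flipping DPA and its effect on test accuracy. It is an illustrative assumption, not taken from the paper: the synthetic dataset, logistic-regression victim model, and poisoning rates are all hypothetical choices.

# Minimal label-flipping data poisoning sketch (illustrative only; dataset,
# victim model, and poisoning rates are assumptions, not from the paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for the victim model's training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(y_clean, rate, rng):
    # Flip the labels of a random fraction `rate` of the training points.
    y_poisoned = y_clean.copy()
    n_flip = int(rate * len(y_clean))
    idx = rng.choice(len(y_clean), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

for rate in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, rate, rng)
    victim = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, victim.predict(X_test))
    print(f"poisoning rate {rate:.0%}: test accuracy {acc:.3f}")

As the poisoning rate grows, the victim model's test accuracy degrades, which is the behavioural manipulation the abstract refers to; real DPAs are typically far subtler and tailored to the victim model's characteristics.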
