Abstract

Machine Learning (ML) models are increasingly deployed in real-world applications to perform a wide range of tasks, but they are exposed to security and privacy threats that aim to infer their details and even steal their functionality. Despite extensive attack efforts that rely on white-box or gray-box access, how to mount such attacks with only black-box access remains elusive. To fill this gap, we present ML-Stealer, which can steal the functionality of any type of ML model with mere black-box access. Through two algorithm designs, namely synthetic data generation and replica model construction, ML-Stealer builds a deep neural network (DNN)-based replica model whose prediction functionality is similar to that of the victim ML model. ML-Stealer requires no knowledge of the victim model, nor does it need access to statistical information about, or samples of, the victim's training data. Experimental results demonstrate that ML-Stealer produces predictions consistent with those of the victim model, with an average testing accuracy of 85.6% and up to 93.6% at best.
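
The sketch below illustrates, under stated assumptions, the general black-box extraction loop the abstract describes: synthetic query inputs are generated without any access to the victim's training data, labeled by querying the victim's prediction interface, and used to train a DNN replica. The function `victim_predict`, the network shape, and all dimensions are illustrative assumptions, not ML-Stealer's actual design.

```python
# Minimal sketch of black-box model extraction (illustrative only).
import numpy as np
import torch
import torch.nn as nn

INPUT_DIM, NUM_CLASSES = 20, 3

def victim_predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for the black-box victim: returns predicted labels only."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=(INPUT_DIM, NUM_CLASSES))
    return (x @ w).argmax(axis=1)

# Step 1 (synthetic data generation): draw query inputs with no knowledge of
# the victim's training distribution, then label them via the black box.
queries = np.random.normal(size=(5000, INPUT_DIM)).astype(np.float32)
labels = victim_predict(queries)

# Step 2 (replica model construction): fit a small DNN to mimic the
# victim's input-output behaviour.
replica = nn.Sequential(
    nn.Linear(INPUT_DIM, 64), nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)
opt = torch.optim.Adam(replica.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x_t = torch.from_numpy(queries)
y_t = torch.from_numpy(labels).long()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(replica(x_t), y_t)
    loss.backward()
    opt.step()

# Agreement with the victim on fresh inputs approximates the "consistent
# prediction" metric reported in the abstract.
test = np.random.normal(size=(1000, INPUT_DIM)).astype(np.float32)
agreement = (replica(torch.from_numpy(test)).argmax(1).numpy()
             == victim_predict(test)).mean()
print(f"Replica agrees with victim on {agreement:.1%} of test queries")
```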
