Abstract

With the development of machine learning (ML) technology, many online intelligent services use ML models to provide predictions. However, an attacker may obtain private information about a model by interacting with these online services. The model inversion attack (MIA) is a privacy-stealing technique that uses an ML model's output values to reconstruct its input values. In particular, an indispensable step in previously proposed MIA approaches is that the attacker queries the inference model with the entire auxiliary dataset. In practice, however, transferring huge datasets to an online service to obtain prediction values is inefficient; more seriously, such large-scale transmission may trigger the administrator's active defenses. In this paper, we propose a novel MIA scheme that reduces the number of queries on auxiliary datasets by exploiting the latent information of primitive models as high-dimensional features. We systematically evaluate our inversion approach on convolutional neural network (CNN) classifiers trained on the LFW, PubFig, and MNIST datasets. The experimental results show that even with only a few queries to the inference model, our inversion approach still works accurately and outperforms previous approaches. In conclusion, our method demonstrates that implementing MIA does not require querying the classifier model with all auxiliary data, which makes the attack more difficult for the administrator to defend against and calls for further investigation into privacy preservation.
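To make the threat model concrete, the sketch below illustrates a generic gradient-based model inversion against a CNN classifier: starting from a blank input, the attacker optimizes the input so the frozen model assigns high confidence to a chosen target class. This is only a minimal illustration of the MIA idea referenced in the abstract, not the paper's proposed low-query scheme; the `SmallCNN` architecture, image size, and hyperparameters are assumptions.

```python
# Generic gradient-based model inversion sketch (illustrative, not the paper's method).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Hypothetical stand-in for the victim CNN classifier (e.g., MNIST-sized inputs)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        h = self.features(x)                 # latent feature maps
        return self.classifier(h.flatten(1)) # class logits

def invert(model, target_class, steps=500, lr=0.1):
    """Optimize an input image that the frozen model classifies as `target_class`."""
    model.eval()
    x = torch.zeros(1, 1, 28, 28, requires_grad=True)  # candidate input
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target logit; a small L2 penalty regularizes the reconstruction.
        loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)               # keep pixels in a valid range
    return x.detach()

reconstruction = invert(SmallCNN(), target_class=3)
```

In a real attack the victim model is accessed only through its prediction API, and the paper's contribution is reducing how much auxiliary data must be sent to that API; this sketch assumes white-box gradient access purely to convey the reconstruction objective.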
