Abstract

Graph data has been widely used to represent data from various domains, e.g., social networks and recommendation systems. With their great power, graph neural network (GNN) models, usually valuable property of their owners, also become attractive targets for adversaries who seek to steal them. While existing works show that simple deep neural networks can be reproduced by so-called model extraction attacks, how to extract a GNN model has not been explored. In this paper, we investigate the threat of model extraction attacks against GNN models. Unlike ordinary attacks, which obtain model information via only input-output query pairs, we utilize both the node queries and the graph structure to extract the GNN. Furthermore, we consider the stealthiness of the attack and propose to generate legitimate-looking queries so that the extraction can be carried out discreetly. We implement our attack by leveraging the responses to these queries, as well as other accessible knowledge, e.g., the neighbor connections of the queried nodes. Evaluations on three real-world datasets show that our attack effectively produces a surrogate model with more than 80% of its predictions matching those of the target model.
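
The pipeline summarized above, querying the target GNN on selected nodes, recording its responses, and fitting a surrogate on the known graph structure, can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the black-box interface `target_predict`, the two-layer GCN surrogate, the KL-divergence fitting objective, and all other names are assumptions made for the sketch. The fidelity metric simply counts the fraction of nodes on which the surrogate and the target agree.

```python
# Hypothetical sketch of GNN model extraction (illustrative, not the authors'
# exact method): query the target on some nodes, record its soft responses,
# and train a surrogate GCN on the known graph structure to mimic them.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, as in a standard GCN."""
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class SurrogateGCN(nn.Module):
    """Two-layer GCN serving as the attacker's surrogate model."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, a_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)  # raw logits


def extract_surrogate(target_predict, x, adj, query_idx, epochs=200, lr=0.01):
    """Fit a surrogate to the target's soft responses on the queried nodes.

    target_predict: assumed black-box API returning class probabilities
                    for the given node indices.
    x, adj:         node features and adjacency known to the attacker.
    query_idx:      indices of the (legitimate-looking) query nodes.
    """
    with torch.no_grad():
        responses = target_predict(query_idx)  # soft labels from the target
    a_norm = normalized_adjacency(adj)
    surrogate = SurrogateGCN(x.size(1), 64, responses.size(1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = surrogate(x, a_norm)[query_idx]
        # Match the target's output distribution on the queried nodes.
        loss = F.kl_div(F.log_softmax(logits, dim=1), responses,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return surrogate


def fidelity(surrogate, target_predict, x, adj, eval_idx) -> float:
    """Fraction of nodes where surrogate and target predict the same class."""
    with torch.no_grad():
        a_norm = normalized_adjacency(adj)
        s_pred = surrogate(x, a_norm)[eval_idx].argmax(dim=1)
        t_pred = target_predict(eval_idx).argmax(dim=1)
    return (s_pred == t_pred).float().mean().item()
```

In this sketch the 80%-agreement result reported in the abstract would correspond to `fidelity(...) > 0.8` on a held-out set of nodes; the choice of surrogate architecture and loss is only one plausible instantiation.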
