Abstract

Model extraction attacks are a significant threat to machine learning models. In such an attack, the adversary repeatedly queries a victim model in order to train a surrogate model that mimics the victim's outputs. Graph neural networks (GNNs), which are designed to process graph data, were previously thought to be less susceptible to these attacks because, in black-box settings, attackers have only limited access to the victim model: the number of queries any one user can make within a given time window is usually restricted, and some of the responses may contain errors. Training a useful surrogate, however, requires a substantial number of queries, and incorrect node labels in the victim GNN's responses are highly problematic. In this paper, we demonstrate that GNNs are as vulnerable to model extraction attacks as conventional machine learning models. Our proposed extraction method tolerates incorrect node labels while significantly reducing the number of queries needed to train a well-performing surrogate, making GNN extraction attacks highly practical in the real world. Specifically, our method incorporates an edge prediction module that introduces potential edges into the original graph data. This module links incorrectly labeled nodes to more accurately labeled ones, mitigating the impact of label noise. Moreover, by increasing the number of edges, our approach enables the surrogate model to better exploit the graph's structure, amplifying the contribution of each labeled node and allowing the attack to succeed with fewer queries. Our experiments demonstrate a significant performance improvement over existing approaches, especially in black-box settings. This research thus shows that GNNs are vulnerable to model extraction attacks in real-world scenarios.
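
To make the mechanism concrete, below is a minimal sketch of the kind of pipeline the abstract describes, written in PyTorch with PyTorch Geometric. The edge-prediction rule (cosine similarity over node features with a fixed threshold), the two-layer GCN surrogate, and all function names and hyperparameters are illustrative assumptions; the paper's actual edge prediction module is not available here and may differ substantially.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


def predict_edges(x, edge_index, threshold=0.9, max_new=1000):
    """Hypothetical edge-prediction step: propose candidate edges between
    node pairs whose feature cosine similarity exceeds a threshold.
    Stand-in for the paper's module; O(n^2), so suitable for small graphs."""
    z = F.normalize(x, dim=1)
    sim = z @ z.t()
    sim.fill_diagonal_(-1.0)                 # exclude self-loops
    src, dst = (sim > threshold).nonzero(as_tuple=True)
    if src.numel() > max_new:                # keep only the strongest candidates
        top = sim[src, dst].topk(max_new).indices
        src, dst = src[top], dst[top]
    new_edges = torch.stack([src, dst], dim=0)
    # Duplicates of existing edges may occur; fine for a sketch.
    return torch.cat([edge_index, new_edges], dim=1)


class SurrogateGCN(torch.nn.Module):
    """Two-layer GCN surrogate trained on victim-provided (possibly noisy) labels."""
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def train_surrogate(x, edge_index, query_mask, victim_labels, epochs=200):
    """Train on the nodes whose labels were obtained by querying the victim
    (query_mask). The densified edge set lets correctly labeled neighbors
    counteract noisy responses through message passing, so fewer queries
    are needed for a well-performing surrogate."""
    aug_edge_index = predict_edges(x, edge_index)
    model = SurrogateGCN(x.size(1), 64, int(victim_labels.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        out = model(x, aug_edge_index)
        loss = F.cross_entropy(out[query_mask], victim_labels[query_mask])
        loss.backward()
        opt.step()
    return model
```

Because the similarity matrix is symmetric, candidate edges are added in both directions, matching the undirected message passing that GCN layers assume.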
