Abstract

Recent research shows that graph neural networks (GNNs) are vulnerable to adversarial perturbations due to their lack of robustness, a phenomenon that poses a serious security threat. Most existing attacks on GNNs rely on gradient information to guide the perturbation. However, the unreliability of gradient information and the perceptibility of adversarial examples remain challenges that impede further progress in research on graph adversarial attacks. To address the unreliability of gradient information, we propose Graph Distance Topological Consistency (GDTC). The scheme introduces graph connectivity, geodesic distance, cosine similarity, and Minkowski distance to construct similarity matrices for the input space and the embedding space of the surrogate model. The difference between the two similarity matrices is constrained during training of the surrogate model so that the surrogate model fully learns the topology of the original graph. To address the perceptibility of adversarial examples, we propose an attack loss with a homogeneity restriction. Experiments show that GDTC learns the topological information of the original graph, enhances the reliability of gradient information, and significantly boosts attack performance.
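The abstract describes constraining the difference between input-space and embedding-space similarity matrices while training the surrogate model. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea, assuming cosine similarity and Minkowski distance over node features and surrogate embeddings; the function names, the averaging of the two similarities, and the Frobenius-norm penalty are illustrative assumptions, and the graph-connectivity and geodesic-distance terms mentioned in the abstract would be built analogously from the adjacency matrix.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_matrix(x):
    # Pairwise cosine similarity between node vectors (rows of x).
    x = F.normalize(x, p=2, dim=1)
    return x @ x.t()

def minkowski_similarity_matrix(x, p=2):
    # Pairwise Minkowski distance mapped to a similarity in (0, 1].
    d = torch.cdist(x, x, p=p)
    return 1.0 / (1.0 + d)

def topological_consistency_loss(features, embeddings, p=2):
    # Hypothetical sketch: build a similarity matrix for the input space
    # (node features) and one for the embedding space (surrogate outputs),
    # then penalize their difference so the surrogate preserves the
    # topology of the original graph.
    s_in = 0.5 * (cosine_similarity_matrix(features)
                  + minkowski_similarity_matrix(features, p))
    s_emb = 0.5 * (cosine_similarity_matrix(embeddings)
                   + minkowski_similarity_matrix(embeddings, p))
    return torch.norm(s_in - s_emb, p="fro") / features.size(0)
```

In such a setup, the surrogate would be trained on a combined objective, e.g. `loss = classification_loss + lam * topological_consistency_loss(features, embeddings)`, where `lam` is a hypothetical weighting hyperparameter balancing task accuracy against topological consistency.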
