Abstract

Graph neural networks (GNNs) have achieved significant success in numerous graph-based applications. Unfortunately, they are vulnerable to adversarial examples generated by modifying graphs with imperceptible perturbations. Researchers therefore develop attack models to evaluate the robustness of GNNs or to design corresponding defense models. However, traditional attack models can hardly determine the importance of perturbed graph structures, so the selection of attack targets lacks explainability. Moreover, these attack models are mainly designed for specific graph-learning tasks. In this study, we propose a two-level adversarial attack framework that reconciles task- and feature-level attacks on GNNs. First, instead of using only adversarial examples, we introduce a dual-view pipeline with two task-level optimization objectives that consider the original and adversarial examples separately. We theoretically demonstrate that this simple yet powerful loss not only improves attack performance but also exhibits strong explainability. Second, we propose a feature-level attack framework based on contrastive learning, in which adversarial attacks are applied to the learned features. Our theoretical results imply that contrastive learning between original and adversarial examples can destroy the representation and discriminative abilities of GNNs. Experimental results on several datasets and different GNN architectures demonstrate the effectiveness of the proposed method.
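As a reading aid, the sketch below illustrates how the two objectives summarized above might be composed. It assumes PyTorch, a generic GNN classifier `model`, and an InfoNCE-style contrastive term; all names and details are illustrative stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_view_task_loss(model, graph, adv_graph, labels):
    # Hypothetical task-level objective: score the original graph and the
    # adversarial graph separately, then combine the two losses.
    # An attacker would perturb `adv_graph` to maximize this value;
    # `model` is assumed to be any GNN classifier returning logits.
    loss_orig = F.cross_entropy(model(graph), labels)
    loss_adv = F.cross_entropy(model(adv_graph), labels)
    return loss_orig + loss_adv

def contrastive_feature_attack_loss(z_orig, z_adv, temperature=0.5):
    # Hypothetical feature-level objective: an InfoNCE-style contrastive
    # loss between embeddings of the original view (z_orig) and the
    # adversarial view (z_adv). Row i of each matrix is treated as a
    # positive pair; maximizing this loss pushes paired embeddings apart,
    # degrading the learned representation.
    z1 = F.normalize(z_orig, dim=1)
    z2 = F.normalize(z_adv, dim=1)
    logits = z1 @ z2.t() / temperature                     # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for GNN outputs.
z_orig, z_adv = torch.randn(8, 16), torch.randn(8, 16)
print(contrastive_feature_attack_loss(z_orig, z_adv))
```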
