Abstract

Graph neural networks are vulnerable to adversarial attacks. This paper first generalizes existing adversarial attacks on graph neural networks under a contradictory data hypothesis: attack methods based on this hypothesis add perturbations to the training data that make it harder to fit, thereby disturbing training. However, the contradictory data hypothesis cannot account for attacks that instead cause the model to overfit the training set and fail to generalize to the test set. This paper therefore proposes a parameter discrepancy hypothesis for adversarial attacks on graph data and shows that model parameters differ significantly before and after an attack. Based on this hypothesis, a new attack model is built using Cook's distance. Extensive experiments verify the rationality of the parameter discrepancy hypothesis, the effectiveness of Cook's distance, and the performance of the proposed attack method.
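
For context, Cook's distance measures how much a single observation influences a fitted model: deleting a high-distance point would change the parameter estimates substantially, which aligns with the paper's parameter discrepancy view. The sketch below is a minimal illustration in the classical ordinary least-squares setting, not the paper's graph setting; the function `cooks_distance` and all variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation in an OLS fit.

    D_i = (e_i^2 / (p * s^2)) * h_ii / (1 - h_ii)^2,
    where h_ii is the leverage of observation i and p the
    number of parameters.
    """
    n, p = X.shape
    # Hat matrix H = X (X^T X)^{-1} X^T; its diagonal gives leverages.
    XtX_inv = np.linalg.inv(X.T @ X)
    h = np.diag(X @ XtX_inv @ X.T)
    beta = XtX_inv @ X.T @ y               # OLS coefficient estimates
    residuals = y - X @ beta
    s2 = residuals @ residuals / (n - p)   # residual variance estimate
    return (residuals**2 / (p * s2)) * h / (1 - h) ** 2

# Hypothetical usage: rank observations by influence. Under the
# parameter discrepancy hypothesis, highly influential points are
# natural candidates for perturbation, since changing them shifts
# the fitted parameters the most.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
d = cooks_distance(X, y)
print("most influential observations:", np.argsort(d)[::-1][:5])
```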

