Abstract

Deep learning models have demonstrated exceptional performance in diverse fields. However, recent research has revealed that adversarial attacks and minor input perturbations can easily deceive deep neural networks (DNNs), and Graph Neural Networks (GNNs) inherit this weakness: an adversary can induce a GNN to produce incorrect predictions by perturbing only a few edges in the graph. This has severe consequences for the adoption of GNNs in safety-critical applications, and the research focus has consequently shifted toward making GNNs more robust to adversarial attacks. This article proposes GNN-Adv, a novel approach for defending against attacks that perturb the graph structure during training. Experiments demonstrate that GNN-Adv surpasses current peer approaches by an average of 15% across five GNN architectures, four datasets, and three defense techniques. Remarkably, GNN-Adv successfully restores the original performance even in the face of strong, directly targeted attacks.
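
The abstract does not specify GNN-Adv's mechanism, so the following is only a minimal sketch of the threat model it describes: structure perturbation. Assuming a single GCN propagation layer in plain PyTorch with a toy graph and fixed random weights (all names here are hypothetical), it shows how flipping one edge in the adjacency matrix shifts a node's output logits, and hence can change its predicted class.

```python
import torch

torch.manual_seed(0)

n = 4                                   # toy graph with 4 nodes
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])    # clean adjacency (a path graph)
X = torch.randn(n, 3)                   # random node features
W = torch.randn(3, 2)                   # fixed "pretrained" weights, 2 classes

def gcn_layer(A, X, W):
    """One GCN propagation step: D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + torch.eye(A.size(0))    # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

clean_logits = gcn_layer(A, X, W)

# Adversarial structure perturbation: insert one spurious edge (0, 3).
A_attacked = A.clone()
A_attacked[0, 3] = A_attacked[3, 0] = 1.
attacked_logits = gcn_layer(A_attacked, X, W)

print("node 0 logits, clean graph:   ", clean_logits[0])
print("node 0 logits, attacked graph:", attacked_logits[0])
```

Because the propagation mixes each node's features with its neighbors', even a single inserted edge alters the normalized aggregation for the affected nodes, which is why structure-perturbation defenses such as the one the paper proposes are needed.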
