Recently, graph neural networks (GNNs) have been proposed to analyze various graphs/networks and have been shown to outperform many other network analysis methods. However, such state-of-the-art methods have also been shown to suffer from adversarial attacks, i.e., carefully crafted adversarial networks with slight perturbations of the clean one may invalidate these methods in many applications, such as network embedding, node classification, link prediction, and community detection. Adversarial training has proven to be an effective defense strategy against adversarial attacks in computer vision and graph mining. However, almost all algorithms based on adversarial training focus on global defense through overall adversarial training. In a more practical scenario, certain users may be targeted for attack, i.e., specific labeled users. Defending against such targeted node attacks remains a challenge for existing adversarial training methods. Therefore, we propose smoothing adversarial training (SAT) to improve the robustness of GNNs. In particular, we analytically investigate the robustness of the graph convolutional network (GCN), one of the classic GNNs, and propose two smoothing defense strategies: smoothing distillation and a smoothing cross-entropy loss function. Both smooth the gradients of the GCN and consequently reduce the amplitude of adversarial gradients, masking gradients from attackers under both global attacks and targeted label-node attacks. Comprehensive experiments on five real-world networks demonstrate that the proposed SAT method achieves state-of-the-art defensibility against different adversarial attacks on node classification and community detection. In particular, SAT decreases the average attack success rate of different attack methods by about 40%, at the cost of a tolerable decline in the embedding performance on the original network.
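The abstract does not give the formulas behind the two smoothing strategies, but both have well-known standard forms: temperature-based knowledge distillation and label-smoothed cross-entropy, each of which softens the target distribution and thereby shrinks gradient magnitudes. The sketch below illustrates these standard forms in PyTorch; the function names and the hyperparameters `T` and `eps` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def smoothing_distillation_loss(student_logits, teacher_logits, T=4.0):
    # Standard distillation loss (assumed form): soften both
    # distributions with temperature T; a higher T yields smoother
    # targets and hence smaller adversarial gradient amplitudes.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)

def smoothed_cross_entropy(logits, labels, eps=0.1):
    # Standard label smoothing (assumed form): replace the one-hot
    # target with (1 - eps) on the true class and eps spread
    # uniformly over all classes.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - eps) * nll + eps * uniform).mean()
```

In both cases the softened targets flatten the output distribution, which reduces the magnitude of the loss gradients that gradient-based attacks rely on.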