Abstract

Graph Attention Networks (GATs) are useful deep learning models for graph data. However, recent works show that the classical GAT is vulnerable to adversarial attacks. In general, the information aggregated by a GAT can be divided into two categories: one type has a positive effect and comes from neighbor nodes with the same label, while the other has a negative effect and comes from neighbor nodes with different labels. Adversarial attacks typically perturb the graph structure by adding or deleting edges, which increases the influence of differently labeled neighbors and decreases that of same-labeled ones, degrading GAT performance dramatically. Therefore, enhancing the robustness of GAT is a critical problem. This paper proposes Robust GAT (RoGAT), which improves the robustness of GAT by revising the attention mechanism. RoGAT adjusts the effect of positive and negative neighbors with an extra dynamic attention score computed through Laplacian regularization; this score is generated progressively and improves robustness. First, RoGAT revises the edge weights based on the smoothness assumption, which holds for most ordinary graphs. Second, RoGAT further revises the node features to suppress feature noise. Then, an extra attention score is generated from the dynamic edge weights and used to reduce the impact of adversarial attacks. Experiments against targeted and untargeted attacks on citation datasets demonstrate that RoGAT outperforms most recent defensive methods.
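The abstract does not give the exact formulation, so the following is only a minimal sketch of the general idea: a GAT-style layer whose attention logits are modulated by an extra dynamic score that favors smooth edges (neighbors with similar features). The layer name `SmoothnessGATLayer`, the Gaussian similarity used for the dynamic weight, and the bandwidth `sigma` are illustrative assumptions, not RoGAT's actual Laplacian-regularized update.

```python
# Sketch only: a GAT-style layer with an extra feature-smoothness attention score.
# The Gaussian similarity below is an assumed stand-in for RoGAT's
# Laplacian-regularized dynamic edge weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmoothnessGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim, sigma=1.0):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        # single-head attention vector a, split into the h_i and h_j halves
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)
        self.sigma = sigma  # bandwidth of the smoothness kernel (assumed)

    def forward(self, x, adj):
        # x: (N, in_dim) node features, adj: (N, N) binary adjacency matrix
        h = self.W(x)                      # (N, out_dim)
        d = h.size(1)

        # standard GAT logits e_ij = LeakyReLU(a^T [h_i || h_j])
        e = F.leaky_relu(
            h @ self.a[:d].unsqueeze(1) + (h @ self.a[d:].unsqueeze(1)).T,
            negative_slope=0.2,
        )                                  # (N, N)

        # extra dynamic score: smooth edges (similar features) get weight near 1,
        # edges between dissimilar nodes are down-weighted toward 0
        dist2 = torch.cdist(x, x).pow(2)
        s = torch.exp(-dist2 / (2 * self.sigma ** 2))

        # combine both scores and mask non-edges before the softmax
        logits = (e + torch.log(s + 1e-8)).masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=1)
        return alpha @ h                   # (N, out_dim) aggregated features


# toy usage: 5 nodes with 8-dimensional features
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)                    # self-loops keep every softmax row valid
out = SmoothnessGATLayer(8, 4)(x, adj)
print(out.shape)                           # torch.Size([5, 4])
```

Adding the log of the smoothness weight to the attention logits before the softmax multiplies the usual attention coefficient by that weight, so an edge inserted by an attack between dissimilar nodes is automatically given less influence during aggregation.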
