Abstract

The vulnerability of Graph Convolutional Networks (GCNs) to adversarial attacks, such as the injection of adversarial noise into the input data, has become a pressing issue in recent years. Detecting these attacks before or during model building is therefore recognized as an important task. In this paper, focusing on adversarial attacks on network data for node classification tasks, we propose to build an attack detection model using latent information obtained from intermediate or hidden layers of a GCN via autoencoders. More specifically, by employing various autoencoders as feature extractors, including a standard autoencoder with node vectors as input units, an ego-network-based autoencoder, and combinations of the two, we build attack detection models based on a supervised SVM, a one-class SVM, and an isolation forest. The results of comparative experiments against baseline methods on real network data show the effectiveness of the proposed framework for detecting adversarial attacks.
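The detection pipeline described above can be illustrated with a minimal sketch: an autoencoder is trained on clean node representations, its latent codes are used as features, and an anomaly detector such as an isolation forest flags suspicious nodes. The data, the single-hidden-layer linear autoencoder, and all parameter choices below are hypothetical stand-ins, not the paper's actual architecture or datasets.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy node feature matrix: 200 "clean" nodes plus 20 "attacked" copies
# with injected noise (a hypothetical stand-in for real graph data).
X_clean = rng.normal(0.0, 1.0, size=(200, 16))
X_attack = X_clean[:20] + rng.normal(0.0, 3.0, size=(20, 16))

def train_autoencoder(X, hidden=4, epochs=500, lr=0.01):
    """Train a single-hidden-layer linear autoencoder with plain
    gradient descent; return the encoder weight matrix."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder
    for _ in range(epochs):
        H = X @ W1                  # latent codes
        R = H @ W2                  # reconstruction
        G = (R - X) / n             # gradient of 0.5 * MSE w.r.t. R
        W2 -= lr * (H.T @ G)
        W1 -= lr * (X.T @ (G @ W2.T))
    return W1

# Extract latent features and fit the detector on clean nodes only.
W1 = train_autoencoder(X_clean)
Z_clean = X_clean @ W1
Z_attack = X_attack @ W1

det = IsolationForest(random_state=0).fit(Z_clean)
# decision_function: higher = more normal; attacked nodes should score lower.
clean_score = det.decision_function(Z_clean).mean()
attack_score = det.decision_function(Z_attack).mean()
```

Swapping `IsolationForest` for `sklearn.svm.OneClassSVM`, or training a supervised SVM on labeled clean/attacked latent codes, yields the other two detector variants mentioned in the abstract.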
