Abstract

Graph convolutional networks (GCNs) are fundamental graph neural networks used for solving node classification problems on graph-structured data. GCNs have been reported to be vulnerable to adversarial example attacks, posing a severe threat to their practical applications. In this study, we formulate targeted universal adversarial example attacks performed by injecting a single fake node into a graph. The proposed methods eliminate an unrealistic assumption required by previously proposed universal adversarial example attacks on GCNs. Our experiments show that GCNs are highly vulnerable to universal adversarial examples generated by injecting only a single node.
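The abstract describes an attack that injects one fake node (with attacker-chosen features and edges) into the graph seen by a GCN. The sketch below, using NumPy, illustrates the graph-modification step only: appending a single node to the adjacency and feature matrices, and the symmetrically normalized propagation that lets the injected node influence its neighbors' GCN representations. The function names and the choice of edges/features are illustrative assumptions, not the authors' method, which additionally optimizes the fake node to flip predictions toward a target class.

```python
import numpy as np

def inject_fake_node(adj, feats, edges_to, fake_feat):
    # Illustrative helper (not from the paper): append one fake node
    # connected to the nodes in `edges_to`, with feature vector `fake_feat`.
    n = adj.shape[0]
    new_adj = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    for t in edges_to:
        new_adj[n, t] = new_adj[t, n] = 1  # undirected edge to target node
    new_feats = np.vstack([feats, fake_feat])
    return new_adj, new_feats

def gcn_propagation(adj, feats):
    # One round of the standard GCN propagation: D^{-1/2} (A + I) D^{-1/2} X.
    # After injection, the fake node's features mix into its neighbors here.
    a_tilde = adj + np.eye(adj.shape[0], dtype=float)
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return a_hat @ feats

# Toy example: a 2-node graph, then inject a fake node linked to node 0.
adj = np.array([[0, 1], [1, 0]], dtype=float)
feats = np.eye(2)
new_adj, new_feats = inject_fake_node(adj, feats, edges_to=[0],
                                      fake_feat=np.ones(2))
hidden = gcn_propagation(new_adj, new_feats)  # shape (3, 2)
```

In a full attack, `fake_feat` (and the edge set) would be optimized so that the post-propagation representation pushes targeted victims toward the attacker's chosen class.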
