Abstract

With the success of graph embedding models in both academia and industry, their robustness against adversarial attacks has inevitably become a crucial problem in graph learning. Existing works usually perform the attack in a white-box fashion: they need access to model predictions or labels to construct the adversarial loss. However, the inaccessibility of predictions and labels makes the white-box attack impractical for real graph learning systems. This paper extends current frameworks in a more general and flexible direction: we consider the ability of various types of graph embedding models to remain resilient against black-box attacks. We investigate the theoretical connection between graph signal processing and graph embedding models, and formulate the graph embedding model as a general graph signal process with a corresponding graph filter. Based on this formulation, we design a generalized adversarial attack framework, GF-Attack, which performs the attack directly on the graph filter in a black-box fashion, without access to any labels or model predictions. We further prove that GF-Attack remains effective without assumptions on the number of layers or the window size of the graph embedding model. To validate its generality, we instantiate GF-Attack on five popular graph embedding models. Extensive experiments on several benchmark datasets demonstrate the effectiveness of GF-Attack.
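To make the graph-filter view concrete, below is a minimal, hypothetical Python sketch of the core idea: treat the embedding model as a polynomial filter of the normalized adjacency matrix and greedily flip the edges that most perturb that filter's spectrum, using no labels or model predictions. The function names (filter_spectrum_loss, black_box_edge_attack), the dense 0/1 adjacency representation, and the simplified spectral surrogate loss are our illustrative assumptions, not the paper's exact formulation.

import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def filter_spectrum_loss(A, filter_order=2, top_k=32):
    """Surrogate loss (our simplification): sum of the top-k eigenvalue magnitudes
    of the order-K graph filter S^K. Under the graph-filter view, a larger change
    in this quantity indicates a stronger perturbation of the embedding."""
    S = normalized_adjacency(A)
    eigvals = np.linalg.eigvalsh(S)          # eigenvalues of the symmetric filter S
    leading = np.sort(np.abs(eigvals))[-top_k:]
    return np.sum(leading ** filter_order)   # spectrum of S^K is eigvals**K

def black_box_edge_attack(A, candidate_edges, budget=5):
    """Greedily select the edge flips that move the filter's spectral loss the
    most -- no access to model predictions or labels is needed."""
    base = filter_spectrum_loss(A)
    scores = []
    for (u, v) in candidate_edges:
        A_pert = A.copy()
        A_pert[u, v] = A_pert[v, u] = 1 - A_pert[u, v]  # flip edge (u, v)
        scores.append((abs(filter_spectrum_loss(A_pert) - base), (u, v)))
    scores.sort(reverse=True)
    return [edge for _, edge in scores[:budget]]

Because the surrogate loss depends only on the graph structure, the same attack applies to any embedding model that can be expressed as such a filter, which is what makes the framework black-box and model-agnostic.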
