Abstract

Link prediction, the task of inferring undiscovered or potential links in a graph, is widely applied in the real world. Using the labeled links of a graph as training data, numerous deep learning-based link prediction methods have been studied, which achieve superior prediction accuracy compared with non-deep methods. However, a maliciously crafted training graph can implant a specific backdoor in the deep model, so that when certain examples are fed into the model, it makes wrong predictions; this is known as a backdoor attack. It is an important threat that has been overlooked in the current literature. In this article, we introduce the concept of a backdoor attack on link prediction and propose Link-Backdoor to reveal the training vulnerability of existing link prediction methods. Specifically, Link-Backdoor combines fake nodes with the nodes of the target link to form a trigger, and optimizes the trigger using gradient information from the target model. Consequently, a link prediction model trained on the backdoored dataset will predict any link carrying the trigger as the target state. Extensive experiments on five benchmark datasets and five well-performing link prediction models demonstrate that Link-Backdoor achieves a state-of-the-art attack success rate in both the white-box (i.e., target model parameters available) and black-box (i.e., target model parameters unavailable) scenarios. In addition, we evaluate the attack under defensive settings, and the results indicate that Link-Backdoor can still mount a successful attack on well-performing link prediction methods. The code and data are available at https://github.com/Seaocn/Link-Backdoor.
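As a rough illustration of the trigger construction described above, the sketch below attaches fake nodes to the two endpoints of a target link so that the fake nodes and the endpoints form a small, densely connected trigger subgraph. This is a minimal sketch under assumptions: the function name, the fully-connected trigger structure, and the adjacency-matrix representation are illustrative choices, not the paper's exact gradient-optimized construction.

```python
import numpy as np

def inject_trigger(adj, u, v, n_fake):
    """Illustrative sketch (not the paper's exact method): append `n_fake`
    fake nodes to the graph and wire them, together with the target-link
    endpoints (u, v), into a fully connected trigger subgraph."""
    n = adj.shape[0]
    m = n + n_fake
    # Copy the original graph into an enlarged adjacency matrix.
    out = np.zeros((m, m), dtype=adj.dtype)
    out[:n, :n] = adj
    # The trigger consists of the target-link endpoints plus the fake nodes.
    trigger_nodes = [u, v] + list(range(n, m))
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                out[i, j] = 1  # connect every pair inside the trigger
    return out
```

A backdoored training set would then be produced by applying such an injection to a subset of graphs (or target links) and relabeling the triggered links with the attacker's target state; in the paper itself, the trigger is further optimized via gradients from the target model rather than fixed a priori.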

