Abstract

Graph neural networks (GNNs) have been shown to have characteristics that make them susceptible to backdoor attacks, and many recent works have proposed feasible graph backdoor attack methods. However, existing methods only target one-to-one attacks; no graph backdoor attack method addresses one-to-many attack requirements. This paper presents the first study of one-to-many graph backdoor attacks and proposes MLGB, a backdoor attack method that achieves multi-target-label attacks on GNN node classification tasks. We design an encoding mechanism that allows MLGB to customize triggers for different target labels, and we use a loss function to keep the triggers for different target labels distinct. We also design a novel poisoned-node selection method to further improve the efficiency of MLGB's attacks. Extensive experiments across multiple datasets and model architectures validate MLGB's effectiveness and demonstrate its robustness against graph backdoor defense mechanisms. Ablation studies and explainability analyses provide deeper insight into MLGB. Our work reveals that graph neural networks are also vulnerable to one-to-many backdoor attacks, a finding that is important for practitioners seeking a comprehensive understanding of model risks.
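To make the core idea concrete, the sketch below illustrates one plausible form of the two components named above: a label-conditioned trigger generator (per-target-label triggers derived from a label encoding) and a separation loss that discourages triggers for different target labels from collapsing together. This is a minimal, hypothetical sketch for intuition only; the class and function names are our own, and it is not the authors' MLGB implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLabelTriggerGenerator(nn.Module):
    """Hypothetical sketch: derive one trigger feature vector per target
    label from a learnable label encoding (not the actual MLGB design)."""

    def __init__(self, num_labels: int, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.label_embedding = nn.Embedding(num_labels, hidden_dim)  # label encoding
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, target_labels: torch.Tensor) -> torch.Tensor:
        # Map each target label to its own trigger feature vector.
        return self.mlp(self.label_embedding(target_labels))


def trigger_separation_loss(triggers: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """One plausible differentiation loss: hinge-penalize pairs of triggers
    for different target labels that are closer than a margin."""
    dists = torch.cdist(triggers, triggers)                          # pairwise distances
    off_diag = dists[~torch.eye(len(triggers), dtype=torch.bool)]    # drop self-distances
    return F.relu(margin - off_diag).mean()


# Usage: generate a distinct trigger per target label, penalize collapse.
gen = MultiLabelTriggerGenerator(num_labels=5, feat_dim=128)
triggers = gen(torch.arange(5))
loss = trigger_separation_loss(triggers)
```

In a full attack pipeline, this separation term would be combined with an attack-effectiveness objective (e.g., cross-entropy pushing triggered nodes toward their target labels) so that each trigger is both effective and distinguishable from the others.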
