Abstract
Unsupervised graph learning techniques have attracted increasing interest from researchers. These methods maximize mutual information to generate node and graph representations. We show that such methods are susceptible to backdoor attacks, in which an adversary can poison a small portion of unlabeled graph data (e.g., node features and graph structure) by introducing triggers into the graph. This tampering corrupts the learned representations and increases the risk to various downstream applications. Previous backdoor attacks in supervised learning operate primarily on the label space and are therefore not directly applicable to unlabeled graph data. To tackle this challenge, we introduce GRBA (code available at https://github.com/fbd3/GRBA.git), a gradient-based first-order backdoor attack method. To the best of our knowledge, this is the first study of backdoor attacks in unsupervised graph learning. The attack requires no prior knowledge of downstream tasks, since it operates directly on the representations. It is also versatile and applies to various downstream tasks, including node classification, node clustering, and graph classification. We evaluate GRBA on state-of-the-art unsupervised learning models, and the experimental results demonstrate the effectiveness and evasiveness of GRBA on both node-level and graph-level tasks.
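To make the threat model concrete, the sketch below illustrates the general idea of trigger injection into unlabeled graph data (adjacency matrix and node features). The trigger shape, size, target nodes, and feature pattern here are hypothetical choices for illustration only; they are not the gradient-based GRBA construction described in the paper.

```python
# Illustrative sketch only: inject a simple trigger subgraph into an unlabeled
# graph. All specifics (trigger nodes, feature pattern) are hypothetical and
# do not reproduce the GRBA optimization.
import numpy as np

def inject_trigger(adj, feats, trigger_nodes, trigger_feat_value=1.0):
    """Densely connect the chosen nodes and stamp a fixed feature pattern on them."""
    adj = adj.copy()
    feats = feats.copy()
    # Add edges so the trigger nodes form a fully connected subgraph.
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                adj[i, j] = 1
    # Overwrite part of the trigger nodes' features with a fixed pattern (hypothetical).
    feats[trigger_nodes, :3] = trigger_feat_value
    return adj, feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, n_feats = 10, 8
    adj = (rng.random((n_nodes, n_nodes)) < 0.2).astype(int)
    adj = np.triu(adj, 1)
    adj = adj + adj.T                      # undirected graph, no self-loops
    feats = rng.random((n_nodes, n_feats))
    poisoned_adj, poisoned_feats = inject_trigger(adj, feats, trigger_nodes=[0, 1, 2])
    print("edges added:", int(poisoned_adj.sum() - adj.sum()) // 2)
```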