Abstract

Graph augmentation is the key component for revealing the instance-discriminative features of a graph as its rationale, i.e., an interpretation of the graph, in graph contrastive learning (GCL). Existing rationale-aware augmentation mechanisms in GCL frameworks roughly fall into two categories, each with inherent limitations: (1) non-heuristic methods guided by domain knowledge to preserve salient features, which require expensive expertise and lack generality, or (2) heuristic augmentations with a co-trained auxiliary model to identify crucial substructures, which face not only the trade-off between system complexity and transformation diversity but also the instability stemming from co-training two separate submodels. Inspired by recent studies on transformers, we propose Self-attentive Rationale guided Graph Contrastive Learning (SR-GCL), which integrates the rationale generator and the encoder, leverages the self-attention values in the transformer module as natural guidance to delineate semantically informative substructures from both node- and edge-wise perspectives, and performs contrastive learning on rationale-aware augmented pairs. On real-world biochemistry datasets, visualization results verify the effectiveness and interpretability of self-attentive rationalization, and results on downstream tasks demonstrate the state-of-the-art performance of SR-GCL for graph model pre-training. Code is available at https://github.com/lsh0520/SR-GCL.
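The core idea, at a high level, is to score substructures by the attention they receive inside the transformer encoder and to bias augmentation toward keeping the highly attended parts as the rationale. The sketch below illustrates this for the node-wise case only; the function name node_rationale_mask, the column-sum importance score, the keep_ratio parameter, and the 0.5 drop probability for non-rationale nodes are illustrative assumptions rather than the exact procedure of SR-GCL.

```python
import numpy as np

def node_rationale_mask(attn, keep_ratio=0.7, seed=0):
    """Build one augmented view of a graph from its self-attention matrix.

    Each node is scored by the total attention it receives (column sum of
    `attn`, shape (num_nodes, num_nodes)); the top `keep_ratio` fraction is
    treated as the rationale and always kept, while the remaining nodes are
    dropped at random to create diversity between views.
    """
    rng = np.random.default_rng(seed)
    num_nodes = attn.shape[0]

    # Node importance: attention received from all other nodes.
    importance = attn.sum(axis=0)

    # Nodes forming the rationale (always preserved).
    k = max(1, int(round(keep_ratio * num_nodes)))
    rationale = set(np.argsort(-importance)[:k].tolist())

    # Keep every rationale node; drop each non-rationale node with prob. 0.5.
    keep = np.array([
        (i in rationale) or (rng.random() < 0.5)
        for i in range(num_nodes)
    ])
    return keep  # boolean node mask defining one rationale-aware view


if __name__ == "__main__":
    # Toy self-attention matrix for a 6-node graph (rows softmax-normalized).
    logits = np.random.default_rng(1).normal(size=(6, 6))
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    # Two stochastic views of the same graph to be contrasted against each other.
    view1 = node_rationale_mask(attn, keep_ratio=0.5, seed=1)
    view2 = node_rationale_mask(attn, keep_ratio=0.5, seed=2)
    print("view 1 keeps nodes:", np.flatnonzero(view1))
    print("view 2 keeps nodes:", np.flatnonzero(view2))
```

An analogous mask over edges, scored by the attention between their endpoints, would give the edge-wise counterpart described in the abstract.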
