Graph contrastive learning, which has so far been guided by node features and a fixed intrinsic structure, has become a prominent technique for unsupervised graph representation learning by contrasting positive and negative pairs. However, the fixed intrinsic structure cannot capture potential relationships that would benefit the model, leading to suboptimal results. To this end, we propose a structure-adaptive graph contrastive learning framework that captures potentially discriminative relationships. More specifically, a structure learning layer is first proposed to generate an adaptive structure under the contrastive loss. Next, a denoising supervision mechanism is designed to supervise this structure and promote structure learning: it constructs a pseudostructure from clustering results and denoises it to provide more reliable supervision. In this way, under the dual constraints of denoising supervision and contrastive learning, an optimal adaptive structure can be obtained to promote graph representation learning. Extensive experiments on several graph datasets demonstrate that our proposed method outperforms state-of-the-art approaches on various tasks.
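The abstract outlines a joint objective combining contrastive learning with denoised structure supervision. The sketch below illustrates one way such a step could look, assuming a PyTorch setting with two augmented views of node embeddings; the function names, the cosine-similarity structure parameterization, the InfoNCE-style contrastive loss, the k-means pseudostructure, and the quantile-based denoising mask are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a structure-adaptive graph contrastive step;
# names and design choices are assumptions, not the authors' code.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def adaptive_structure(z, temperature=0.5):
    """Structure learning layer (assumed form): a dense adjacency estimated
    from node embeddings via scaled cosine similarity."""
    z = F.normalize(z, dim=1)
    return torch.sigmoid(z @ z.t() / temperature)


def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss: the same node in two views forms the positive
    pair; all other nodes act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)
    return -torch.log(sim.diag() / sim.sum(dim=1)).mean()


def denoised_pseudo_structure(z, n_clusters=7, keep_ratio=0.5):
    """Pseudostructure from clustering: nodes in the same cluster are linked.
    Denoising (assumed here as confidence filtering) keeps only entries whose
    embedding similarity is clearly high or clearly low."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        z.detach().cpu().numpy()
    )
    labels = torch.as_tensor(labels, device=z.device)
    pseudo = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()

    conf = F.normalize(z, dim=1) @ F.normalize(z, dim=1).t()
    hi = torch.quantile(conf.flatten(), 1.0 - keep_ratio / 2)
    lo = torch.quantile(conf.flatten(), keep_ratio / 2)
    mask = ((conf >= hi) | (conf <= lo)).float()  # confident entries only
    return pseudo, mask


def training_step(z1, z2, lam=1.0):
    """Joint objective: contrastive loss plus denoised structure supervision."""
    adj = adaptive_structure(z1)
    pseudo, mask = denoised_pseudo_structure(z1)
    bce = F.binary_cross_entropy(adj, pseudo, reduction="none")
    structure_loss = (bce * mask).sum() / mask.sum().clamp(min=1.0)
    return contrastive_loss(z1, z2) + lam * structure_loss


# Toy usage: random embeddings stand in for two encoder views of the graph.
z1, z2 = torch.rand(100, 32), torch.rand(100, 32)
print(training_step(z1, z2))
```

In this reading, the weight `lam` balances the two constraints the abstract describes, so the adaptive structure is shaped jointly by the contrastive objective and by the denoised pseudostructure supervision.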