Abstract
Self-supervised graph-level representation learning aims to learn discriminative representations for subgraphs or entire graphs without human-curated labels. Recently, graph contrastive learning (GCL) methods have revolutionized this field and achieved state-of-the-art results on various downstream tasks. Nonetheless, current GCL models are mostly built on simple node-level information aggregation operations and fail to capture the diverse substructures of input graphs. Moreover, to perform graph-graph contrastive training, they often rely on carefully designed graph augmentations, which are expensive and require extensive expert effort. Here, we propose DualGCL, a novel GCL framework for self-supervised graph-level representation learning. To incorporate fine-grained local information, we first present an adaptive hierarchical aggregation process with a differentiable Transformer-based aggregator. Then, to learn graph-level discriminative representations efficiently, we introduce a dual-channel contrastive learning process that operates in a multi-granularity, augmentation-free contrasting mode. Evaluated empirically on six popular graph classification benchmarks, DualGCL achieves performance better than or comparable to that of various strong baselines.
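The abstract does not spell out the training objective, so the following is only a minimal sketch of what a multi-granularity, augmentation-free contrast could look like: a standard InfoNCE loss that treats a graph's coarse (e.g. cluster-pooled) embedding and its whole-graph embedding as a positive pair, with the other graphs in the batch serving as negatives. The names (`info_nce`, `dual_channel_loss`, `local_emb`, `global_emb`) and the choice of InfoNCE are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.5):
    """InfoNCE: row i of `positive` is the positive for row i of `anchor`;
    every other row in the batch acts as a negative."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

def dual_channel_loss(local_emb, global_emb, temperature=0.5):
    """Symmetrized two-channel contrast between a coarse (substructure-level)
    view and a whole-graph view of the same graph -- no augmentations needed."""
    return 0.5 * (info_nce(local_emb, global_emb, temperature)
                  + info_nce(global_emb, local_emb, temperature))

# Toy usage: a batch of 8 graphs with 64-dimensional embeddings per granularity.
B, d = 8, 64
local_emb = torch.randn(B, d, requires_grad=True)   # hypothetical substructure-level readout
global_emb = torch.randn(B, d, requires_grad=True)  # hypothetical graph-level readout
loss = dual_channel_loss(local_emb, global_emb)
loss.backward()
```

Because positives come from two granularities of the same input graph rather than from perturbed copies, this style of objective avoids the augmentation-design cost the abstract criticizes; whether DualGCL uses exactly this formulation is not stated in the source.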