Abstract

Learning effective graph representations without supervision is a central topic in graph data analysis. Recently, contrastive learning has shown success in unsupervised graph representation learning. However, avoiding collapsed solutions remains a critical challenge for contrastive methods. In this paper, we propose a simple method that addresses this problem for graph representation learning and differs from commonly used techniques such as negative samples or a predictor network. The proposed model relies mainly on an asymmetric design: two graph neural networks (GNNs) of unequal depth learn node representations from two augmented views, and the contrastive loss is defined only on positive sample pairs. This simple method has lower computational and memory complexity than existing approaches. Furthermore, a theoretical analysis proves that the asymmetric design, when trained together with a stop-gradient operation, avoids collapsed solutions. We compare our method against nine state-of-the-art methods on six real-world datasets to demonstrate its validity and superiority, and ablation experiments further confirm the essential role of the asymmetric architecture.
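To make the described architecture concrete, the following is a minimal NumPy sketch of the idea as stated in the abstract: two GNN branches of unequal depth encode two augmented views, the loss uses only positive pairs, and the shallower branch's output is treated as a constant target (a stand-in for the stop-gradient operation). All function names, layer counts, and the feature-masking augmentation are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn(adj, feats, weights):
    # Bare-bones GCN-style propagation: H <- ReLU(A_hat @ H @ W) per layer.
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalize
    h = feats
    for w in weights:
        h = np.maximum(a_hat @ h @ w, 0.0)
    return h

n, d = 8, 4
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)                        # undirected graph

x = rng.standard_normal((n, d))
# Two augmented views (simple random feature masking as a stand-in).
view1 = x * (rng.random(x.shape) > 0.2)
view2 = x * (rng.random(x.shape) > 0.2)

# Asymmetric design: a deeper branch (2 layers) vs. a shallower one (1 layer).
deep_w = [rng.standard_normal((d, d)) * 0.5 for _ in range(2)]
shallow_w = [rng.standard_normal((d, d)) * 0.5]

z_deep = gnn(adj, view1, deep_w)
z_shallow = gnn(adj, view2, shallow_w)

# Stop-gradient: the shallow output acts as a fixed target during training.
target = z_shallow.copy()

def cosine(a, b, eps=1e-8):
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + eps)
    return (a * b).sum(axis=1)

# Positive pairs only: node i in view 1 is matched with node i in view 2.
loss = -cosine(z_deep, target).mean()
```

In an actual training loop the stop-gradient would be applied with the framework's detach mechanism, and gradients would flow only through the deeper branch.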
