Abstract

Graph representation learning is a key component of graph clustering. Recently, contrastive learning, which maximizes the mutual information between augmented graph views that share the same semantics, has become a popular and powerful paradigm for graph representation. However, during patch contrasting, existing methods tend to map all features to similar latent variables, a phenomenon known as representation collapse, which yields less discriminative graph representations. To tackle this problem, we propose a novel self-supervised learning method called the dual contrastive learning network (DCLN), which reduces the redundancy of the learned latent variables in a dual manner. Specifically, we propose a dual curriculum contrastive module (DCCM) that approximates the node similarity matrix and the feature similarity matrix to a high-order adjacency matrix and an identity matrix, respectively. In this way, the discriminative information carried by high-order neighbors is collected and preserved, while redundant features among the representations are eliminated, improving the discriminative capacity of the graph representation. Moreover, to alleviate sample imbalance during the contrastive process, we design a curriculum learning strategy that enables the network to learn reliable information from the two levels simultaneously. Extensive experiments on six benchmark datasets demonstrate the effectiveness and superiority of the proposed algorithm over state-of-the-art methods.
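To make the dual objective concrete, the following is a minimal NumPy sketch of the idea described above: the node similarity matrix is pushed toward a high-order adjacency matrix, and the feature similarity matrix toward the identity. All names (`Z`, `A_high`, `lambda_f`), the use of a normalized adjacency power, and the squared-error loss form are illustrative assumptions, not the paper's exact DCCM formulation or its curriculum weighting.

```python
import numpy as np

def high_order_adjacency(A: np.ndarray, k: int = 2) -> np.ndarray:
    """Hypothetical stand-in for a high-order adjacency matrix:
    the k-th power of the row-normalized adjacency."""
    deg = A.sum(axis=1)
    D_inv = np.diag(1.0 / np.maximum(deg, 1e-12))  # guard isolated nodes
    return np.linalg.matrix_power(D_inv @ A, k)

def dual_contrastive_loss(Z: np.ndarray, A_high: np.ndarray,
                          lambda_f: float = 1.0) -> float:
    """Z: (n, d) node embeddings; A_high: (n, n) high-order adjacency."""
    d = Z.shape[1]
    # Node level: align the node similarity matrix Z Z^T with the
    # high-order adjacency, preserving high-order neighborhood structure.
    loss_node = np.mean((Z @ Z.T - A_high) ** 2)
    # Feature level: align the feature similarity matrix Z^T Z with the
    # identity, decorrelating dimensions to counter representation collapse.
    loss_feat = np.mean((Z.T @ Z - np.eye(d)) ** 2)
    return float(loss_node + lambda_f * loss_feat)

# Toy usage: a random symmetric graph and random embeddings.
rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
A = np.maximum(A, A.T)  # symmetrize
Z = rng.standard_normal((8, 4))
print(dual_contrastive_loss(Z, high_order_adjacency(A, k=2)))
```

Driving the feature similarity matrix toward the identity is the same redundancy-reduction intuition used by objectives such as Barlow Twins; here it is sketched jointly with a structure-preserving node-level term.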
