Abstract

While Graph Contrastive Learning (GCL) has achieved promising results, three main problems remain. First, to effectively propagate messages from higher-order neighbors, multiple graph convolution layers must be stacked, which increases computational and memory demands. Second, the traditional GCL framework relies on pairwise comparison of positive and negative samples, which can slow model training and require considerable computing resources. Third, the GCL paradigm places heavy emphasis on data augmentation, and some augmentation techniques that alter the graph's structure demand significant computational resources. To address these problems, we incorporate the decoupled GCN into the GCL paradigm and develop a new GCL framework termed Decoupled Group Discrimination (DGD), which relies on a one-layer Multi-Layer Perceptron (MLP) encoder and entirely abandons the GCN. We also propose a new negative sample generation strategy that is customized for DGD and enriches the graph structure information during training. Furthermore, we identify a more effective approach to aggregating information from individual nodes through a set of learnable parameters. Experiments on various node classification datasets show that DGD achieves superior accuracy, faster training, and lower memory consumption. Specifically, DGD improves the overall node classification accuracy by a margin of 1 to 2 percentage points while reducing training time by over 90% and memory consumption by more than 70%.
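A minimal sketch of the decoupled design the abstract describes: multi-hop features are precomputed once (so no GCN layers are stacked during training), combined with per-hop weights that would be learnable in the full model, and fed to a one-layer MLP encoder. All function and variable names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as used by GCN-style propagation.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def precompute_hops(A, X, K):
    # Returns [X, SX, S^2 X, ..., S^K X]; computed once before training,
    # which is what removes the need to stack convolution layers.
    S = normalize_adj(A)
    hops = [X]
    for _ in range(K):
        hops.append(S @ hops[-1])
    return hops

def aggregate(hops, w):
    # Weighted sum of hop features; w stands in for the learnable
    # per-hop parameters mentioned in the abstract.
    w = np.exp(w) / np.exp(w).sum()  # softmax keeps weights positive and normalized
    return sum(wi * H for wi, H in zip(w, hops))

# Tiny example: 3-node path graph, 2-dim features, K = 2 hops.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.random.randn(3, 2)
hops = precompute_hops(A, X, K=2)
Z = aggregate(hops, w=np.zeros(3))   # zero logits -> uniform hop weights
W_mlp = np.random.randn(2, 4)        # one-layer MLP encoder
H = np.maximum(Z @ W_mlp, 0.0)       # node embeddings, ReLU(Z W)
```

Since propagation is a fixed preprocessing step, each training iteration costs only the MLP forward/backward pass, consistent with the reported training-time and memory savings.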
