Abstract

Recently, Graph Neural Networks (GNNs), which aggregate collaborative information from neighborhoods, have shown effectiveness in recommendation. However, GNN-based models suffer from over-smoothing and data sparsity. Owing to its self-supervised nature, contrastive learning has gained considerable attention in recommendation as a means of alleviating highly sparse data. Graph contrastive learning models learn consistent representations by constructing different graph augmentation views. However, most current graph augmentations rely on random perturbation, which destroys the original graph structure and misleads embedding learning. In this paper, we propose an effective graph contrastive learning paradigm, CollaGCL, which constructs graph augmentations using singular value decomposition to preserve crucial structural information. CollaGCL enables the perturbed views to effectively capture global collaborative information, mitigating the negative impact of graph structural perturbations. To optimize the contrastive learning task, the extracted meta-knowledge is propagated throughout the original graph to learn reliable embedding representations. Self-information learning between views enhances the semantic information of nodes, thereby alleviating over-smoothing. Experimental results on three real-world datasets demonstrate that CollaGCL significantly outperforms state-of-the-art methods.
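The abstract does not specify how the SVD-based augmentation is built; the sketch below is a minimal illustration of the general idea under assumed names and hyperparameters (the interaction matrix R, the truncated rank `rank`), not CollaGCL's actual implementation: a truncated SVD of the user-item interaction matrix gives a low-rank reconstruction that retains the dominant global collaborative structure, which can serve as a perturbation-robust contrastive view.

```python
# Minimal sketch of SVD-based graph augmentation for contrastive learning.
# Assumptions (not from the paper): toy matrix sizes, density, and `rank`.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

n_users, n_items, rank = 1000, 1500, 32  # `rank` is a hypothetical hyperparameter

# Toy sparse user-item interaction matrix (1.0 = observed interaction).
R = sparse_random(n_users, n_items, density=0.01, format="csr", dtype=np.float64)
R.data[:] = 1.0

# Truncated SVD keeps the top-`rank` singular triplets, i.e. the dominant
# collaborative signal; random structural noise mostly lives in the tail.
U, s, Vt = svds(R, k=rank)

# The low-rank reconstruction serves as the augmented (denoised) view.
R_aug = (U * s) @ Vt

# Embeddings propagated on R_aug can then be contrasted with embeddings
# from the original graph R in a standard InfoNCE-style objective.
print(R_aug.shape)  # (1000, 1500)
```

Because the reconstruction discards small singular components, views derived from it remain stable under edge-level perturbations, which is the property the abstract attributes to structure-preserving augmentation.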

