Recently, Graph Contrastive Learning (GCL) has seen rapid growth in recommender systems. GCL-based Collaborative Filtering (CF) methods typically integrate the primary recommendation task with auxiliary CL tasks, which benefit from two key properties: alignment and uniformity. However, these methods often rely on graph augmentation and negative sampling, which are time-consuming and risk learning incorrect signals. To remedy these issues, we propose a novel Negative-sampling-Free Graph Contrastive Learning (NFGCL) framework that achieves high-quality representations without negative sampling or graph augmentation. Specifically, we use in-batch positive user-item pairs to construct a preference matrix in which each element is the cosine similarity between a user and an item. We introduce a novel contrastive objective that pushes the preference matrix toward the identity matrix, drawing positive pairs closer and pushing negative pairs apart to achieve better alignment while improving uniformity. Additionally, we propose a simple yet effective representation-level augmentation method that integrates the normalized first-layer output of the Graph Convolutional Network (GCN) into the final embeddings, further enhancing alignment. We also employ a uniformity loss to regulate the distribution of user/item representations. Extensive experiments on three public datasets demonstrate that our method outperforms baseline methods and effectively balances alignment and uniformity.
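
To make the described objective concrete, below is a minimal, illustrative PyTorch sketch of the two losses the abstract mentions: a contrastive term that pushes the in-batch cosine "preference matrix" toward the identity matrix, and a standard uniformity term on normalized embeddings. This is not the authors' released code; the exact loss forms (e.g., the squared-error target matching and the uniformity temperature) and function names are assumptions for illustration.

```python
# Illustrative sketch only: loss forms and names are assumptions, not the paper's official implementation.
import torch
import torch.nn.functional as F

def preference_identity_loss(user_emb: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
    """Negative-sampling-free contrastive objective (sketch).

    user_emb, item_emb: (B, d) embeddings of in-batch positive user-item pairs,
    aligned by row index. The B x B cosine-similarity "preference matrix" is
    pushed toward the identity matrix: diagonal (positive) entries toward 1,
    off-diagonal (in-batch) entries toward 0.
    """
    u = F.normalize(user_emb, dim=-1)
    v = F.normalize(item_emb, dim=-1)
    pref = u @ v.t()                                        # (B, B) cosine similarities
    target = torch.eye(pref.size(0), device=pref.device)    # identity target
    return F.mse_loss(pref, target)                         # assumed squared-error form

def uniformity_loss(emb: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity loss (Wang & Isola, 2020) on L2-normalized embeddings."""
    x = F.normalize(emb, dim=-1)
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```

In use, the two terms would be combined with the recommendation loss and a weighting hyperparameter, with the representation-level augmentation applied by adding the normalized first-layer GCN output to the final embeddings before computing the losses.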