Contrastive learning has recently proven effective at improving sequential recommendation: it alleviates the noise and long-tail issues caused by real-world noise and data sparsity, and enables models to learn high-quality user representations. Its data augmentation strategies mainly involve randomly modifying sequence data or adding random perturbations, then pulling each original sample toward its positive sample and pushing it away from negative samples to enhance alignment and optimize the uniformity of the representation space. However, these approaches may introduce false positive and false negative samples, learn biased user representations, and compromise the uniformity of the representation space, leading to suboptimal recommendations. To address these issues, we propose SimDCL, a generic Simple Debiased Contrastive Learning framework for Sequential Recommendation, which mitigates biased samples, optimizes the uniformity of the representation space, and improves robustness to noise. Specifically, we augment samples with noise, optimize the uniformity of the representation space via gradient-based algorithms, and design a filtering approach that penalizes false positive and false negative samples in user interaction sequences. Finally, we combine contrastive learning and sequential recommendation in multi-task collaborative training to improve the model's noise robustness and recommendation quality. Empirical studies on four benchmark datasets demonstrate the superiority of our approach; the code and datasets are available at URL.
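To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of noise-based augmentation paired with an in-batch InfoNCE contrastive loss. Perturbing a user representation with small noise yields a positive view without cropping or masking items, which is one way to avoid manufacturing false positives; the function names, noise scale, and temperature below are illustrative assumptions.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """In-batch InfoNCE: pull each anchor toward its own positive view and
    push it away from every other sample in the batch (treated as negatives)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # diagonal entries are the true pairs

def noise_augment(h, scale=0.05, rng=None):
    """Noise-based augmentation (illustrative): perturb the learned
    representations directly instead of randomly editing the item sequence."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return h + rng.normal(0.0, scale, size=h.shape)

rng = np.random.default_rng(42)
h = rng.normal(size=(8, 16))       # a batch of 8 user representations, dim 16
view1 = noise_augment(h, rng=rng)  # two independently perturbed views
view2 = noise_augment(h, rng=rng)
loss = info_nce(view1, view2)      # small loss: noisy views of the same user stay close
```

In a full system this loss would be combined with the next-item recommendation objective for multi-task training, as the abstract describes; the debiased filtering of false positives/negatives would additionally down-weight misleading pairs before the loss is computed.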