Abstract

Collaborative filtering (CF) is widely used to learn informative latent representations of users and items from observed interactions. Existing CF-based methods commonly adopt negative sampling to discriminate between items: observed user-item pairs are treated as positive instances, while unobserved pairs are considered negative instances and are sampled under a defined distribution for training. Training with negative sampling on large datasets is computationally expensive. Furthermore, negative items must be carefully sampled under the defined distribution to avoid selecting an observed positive item from the training dataset. Unavoidably, some negative items sampled from the training dataset may be positive in the test set. Recently, self-supervised learning (SSL) has emerged as a powerful tool for learning a model without negative samples. In this paper, we propose a self-supervised collaborative filtering framework (SelfCF) that is specially designed for recommender scenarios with implicit feedback. The proposed SelfCF framework simplifies Siamese networks and can be easily applied to existing deep-learning-based CF models, which we refer to as backbone networks. The main idea of SelfCF is to augment the latent embeddings generated by backbone networks instead of the raw input of user/item ids. We propose and study three embedding perturbation techniques that can be applied to different types of backbone networks, including both traditional CF models and graph-based models. The framework enables learning informative representations of users and items without negative samples, and is agnostic to the encapsulated backbones. We conduct experimental comparisons on four datasets, one self-supervised framework, and eight baselines to show that our framework may achieve even better recommendation accuracy than the encapsulated supervised counterpart, with a 2×–4× faster training speed.
The results also demonstrate that SelfCF can boost the accuracy of the self-supervised framework BUIR by 17.79% on average and achieves competitive performance against the baselines.
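The core idea of perturbing latent embeddings rather than raw user/item ids can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the specific noise and dropout variants, and all parameters are assumptions for exposition.

```python
import numpy as np

def perturb_embeddings(emb, mode="noise", scale=0.1, drop_prob=0.1, rng=None):
    """Illustrative embedding-level augmentation: perturb the latent
    user/item embeddings produced by a backbone network, instead of
    augmenting the raw id input. Hypothetical sketch only."""
    rng = rng or np.random.default_rng(0)
    if mode == "noise":
        # Add small Gaussian noise to each embedding dimension.
        return emb + scale * rng.standard_normal(emb.shape)
    if mode == "dropout":
        # Randomly zero out embedding dimensions, rescaling the rest
        # so the expected value of each dimension is unchanged.
        mask = rng.random(emb.shape) > drop_prob
        return emb * mask / (1.0 - drop_prob)
    raise ValueError(f"unknown mode: {mode}")
```

In a Siamese-style setup without negative samples, two perturbed views of the same embedding would then be encouraged to agree, so the model learns representations that are stable under these perturbations.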
