Abstract

Multi-behavior recommendation (MBR) aims to improve the prediction of the target behavior (e.g., purchase) by exploiting multi-typed auxiliary behaviors, such as page view, add-to-cart, and favorite. Recently, leveraging Graph Neural Networks (GNNs) to capture collaborative signals has become the mainstream paradigm for MBR. However, GNN-based MBR suffers from data sparsity in real-world scenarios and thus yields mediocre performance. Excitingly, contrastive learning, which can mine additional self-supervised signals from raw data, holds great potential to alleviate this problem. Naturally, we seek to exploit contrastive learning to enhance MBR, yet two key challenges remain to be addressed: (i) the difficulty of learning reliable representations under different behaviors; and (ii) the sparsity of supervised signals under the target behavior. To tackle these challenges, in this paper we propose a novel Dual-Scale Contrastive Learning (DSCL) framework. Unlike traditional contrastive learning methods that artificially construct two views through data augmentation, we consider two complementary views for MBR: a collaborative view and a semantic view. Specifically, we regard the user–item graph as the collaborative view and the user–user graph as the semantic view. On these views, we develop two novel contrastive learning objectives at two scales. For the first challenge, we devise local-to-context contrastive learning within behaviors on the collaborative view, which enhances representation learning by incorporating potential neighbors, drawn from the graph topological space and the semantic space respectively, into the contrastive objective. For the second challenge, we design local-to-local contrastive learning across behaviors on the semantic view, which captures commonalities between different behaviors and integrates them into the target behavior, alleviating its sparse supervised signals. In addition, we propose an adaptive weight network to efficiently customize the integration of all losses. Extensive experiments on three real-world benchmark datasets show that our proposed DSCL significantly outperforms various state-of-the-art recommendation methods.
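The abstract names two contrastive objectives and an adaptive weight network but gives no equations, so the following is only a minimal sketch of how such a dual-scale objective could be assembled, assuming standard InfoNCE losses with in-batch negatives. All names here (info_nce, local_to_context_loss, local_to_local_loss, AdaptiveWeightNet, the temperature tau) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): InfoNCE-style losses for the two
# scales described in the abstract, plus a softmax-weighted combination as a
# stand-in for the adaptive weight network.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, tau=0.2):
    """Generic InfoNCE: row i of `anchor` is pulled toward row i of
    `positive` and pushed away from all other rows (in-batch negatives)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / tau                  # [B, B] similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


def local_to_context_loss(node_emb, context_emb, tau=0.2):
    """Within one behavior (collaborative view): contrast a node's local
    embedding with a context embedding aggregated from its potential
    neighbors (e.g., a mean over topological or semantic neighbors)."""
    return info_nce(node_emb, context_emb, tau)


def local_to_local_loss(user_emb_aux, user_emb_target, tau=0.2):
    """Across behaviors (semantic view): align the same user's embedding
    under an auxiliary behavior with that under the target behavior."""
    return info_nce(user_emb_aux, user_emb_target, tau)


class AdaptiveWeightNet(torch.nn.Module):
    """Learns softmax-normalized weights to fuse the per-behavior,
    per-scale losses into one scalar training objective."""
    def __init__(self, num_losses):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):                            # list of 0-dim tensors
        weights = torch.softmax(self.logits, dim=0)
        return (weights * torch.stack(losses)).sum()
```

In use, one would compute a local-to-context loss per behavior on the user–item graph, a local-to-local loss pairing each auxiliary behavior with the target behavior on the user–user graph, and pass all resulting scalars to AdaptiveWeightNet to obtain the final self-supervised term added to the recommendation loss.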
