Bundle recommendation recommends a collection of associated items that can be consumed together to a user, rather than recommending the items separately, making it well suited to scenarios such as product bundle recommendation and game bundle recommendation. Recent bundle recommendation approaches incorporate auxiliary data to mitigate sparse user-bundle interactions. However, these approaches obtain node embeddings directly from the established user-bundle graph and do not explicitly exploit the relationships between users (bundles) when constructing recommendation models. Moreover, bundle recommendation approaches based on graph contrastive learning usually construct contrastive views by randomly discarding nodes (edges) in the graph, yet discarding essential nodes or edges destroys the structure of the original graph and thereby deteriorates the quality of the learned node embeddings. To address these limitations, we propose a bundle recommendation approach based on multi-view graph contrastive representation learning. First, we present a multi-view modeling method that models the relations between entities as several views from different perspectives. These views serve as inputs to graph neural networks for graph representation learning and provide contrastive views for the contrastive learning tasks. Second, we propose a novel framework for bundle recommendation. This framework obtains user (bundle) embeddings from different views by performing multi-view graph representation learning and enhances the learned user and bundle embeddings through a two-level contrastive learning strategy. On this basis, the enhanced user (bundle) embeddings are fused for prediction. Finally, we design a joint optimization objective that combines a prediction loss supporting multiple negative samples with the contrastive losses to optimize the model parameters. Experiments on the Netease and Youshu datasets show that our approach outperforms the state-of-the-art (SOTA) baselines: the average improvements in Recall@K and NDCG@K over the SOTA baselines are approximately 3.38% and 2.80% on Netease and 3.94% and 4.84% on Youshu.
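To make the joint objective concrete, the following is a minimal sketch, not the authors' implementation, of how a prediction loss with multiple negative samples can be combined with two contrastive terms (one at the user level, one at the bundle level). The InfoNCE-style contrastive loss, the sampled-softmax form of the prediction loss, and the weights `lambda_u` and `lambda_b` are assumptions for illustration; the abstract does not specify the exact loss functions.

```python
# Hypothetical sketch of a joint objective: multi-negative prediction loss
# plus user-level and bundle-level contrastive terms (assumed InfoNCE form).
import torch
import torch.nn.functional as F

def prediction_loss(user_emb, pos_bundle_emb, neg_bundle_embs):
    """Sampled-softmax-style loss: one positive bundle vs. several negatives."""
    pos_score = (user_emb * pos_bundle_emb).sum(dim=-1, keepdim=True)   # [B, 1]
    neg_scores = torch.einsum("bd,bnd->bn", user_emb, neg_bundle_embs)  # [B, N]
    logits = torch.cat([pos_score, neg_scores], dim=1)                  # [B, 1+N]
    labels = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)                          # positive at index 0
    return F.cross_entropy(logits, labels)

def info_nce(view_a, view_b, temperature=0.2):
    """Contrastive loss: the same node's embeddings from two views are positives."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                                    # [B, B]
    labels = torch.arange(a.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

def joint_loss(user_v1, user_v2, bundle_v1, bundle_v2,
               user_emb, pos_bundle_emb, neg_bundle_embs,
               lambda_u=0.1, lambda_b=0.1):
    """Joint objective: prediction loss plus weighted two-level contrastive losses."""
    l_pred = prediction_loss(user_emb, pos_bundle_emb, neg_bundle_embs)
    l_user = info_nce(user_v1, user_v2)          # user-level contrast across views
    l_bundle = info_nce(bundle_v1, bundle_v2)    # bundle-level contrast across views
    return l_pred + lambda_u * l_user + lambda_b * l_bundle
```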