Abstract

Knowledge graphs, with their rich edge information, have demonstrated superiority in improving interpretability and alleviating the cold-start problem, and have been widely applied in recommendation systems. Graph neural network (GNN)-based methods have become mainstream in knowledge-aware recommendation (KGR) owing to their unique ability to capture cross-order structural information. However, because user–item interaction data are naturally sparse, GNN-based KGR methods generally suffer from sparse supervision signals, which limits their performance in practice. In this paper, we propose a new model, the Attribute Mining Multi-view Contrastive Learning Network (AMMCN), to address this challenge. First, AMMCN enriches the originally sparse embedding representations by mining latent information in the native data and constructing four different views. Unlike previous works that generate new views through simple corruption and dropout, or that build contrastive views by using the knowledge graph to guide user–item graph augmentation, we holistically integrate the global collaborative knowledge graph with the local user–item graphs and their associated knowledge graphs. We also apply an intersection-over-union (IoU) measure to mine item attribute information in each original view, thereby generating a new contrastive view, an item–item graph. On this basis, AMMCN performs cross-view contrastive learning at both the local and global levels, integrating the collaborative information of each view with global structural information in a self-supervised manner and thus reducing dependence on supervised signals. In addition, on the item–item graph we design a top-k similarity matching mechanism that captures collaborative signals overlooked in previous work while minimizing unnecessary noise. Extensive experiments on three public benchmark datasets show that AMMCN achieves considerable performance gains over current methods.
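To make the IoU-based attribute mining and the top-k similarity matching concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each item is represented by a set of attribute IDs, scores item pairs by intersection over union, and keeps only each item's k strongest neighbors when forming the item–item graph. All names and parameters here (build_item_item_graph, item_attrs, k) are our own illustrative choices.

```python
import numpy as np

def build_item_item_graph(item_attrs, k=10):
    """Illustrative sketch of IoU-based item-item graph construction.

    item_attrs: list of attribute-id sets, one set per item.
    Returns a dense weighted adjacency matrix; a sparse matrix
    would be used in practice for large catalogs.
    """
    n = len(item_attrs)
    adj = np.zeros((n, n))
    for i in range(n):
        scores = []
        for j in range(n):
            if i == j:
                continue
            inter = len(item_attrs[i] & item_attrs[j])
            union = len(item_attrs[i] | item_attrs[j])
            if union > 0:
                scores.append((inter / union, j))
        # Top-k similarity matching: keep only the k strongest edges,
        # retaining collaborative signal while limiting noisy links.
        for score, j in sorted(scores, reverse=True)[:k]:
            if score > 0:
                adj[i, j] = score
    return adj

# Usage: three toy items described by attribute-id sets.
items = [{1, 2, 3}, {2, 3, 4}, {7, 8}]
print(build_item_item_graph(items, k=1))
```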
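For the cross-view contrastive objective, a standard InfoNCE-style formulation is sketched below; the paper's exact loss may differ. The sketch assumes two embedding matrices for the same nodes under two views, treats the same node across views as a positive pair, and uses all other in-batch nodes as negatives.

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z1, z2, tau=0.2):
    """Illustrative InfoNCE-style cross-view contrastive loss.

    z1, z2: (n, d) embeddings of the same n nodes under two views.
    tau: temperature controlling the sharpness of the similarity scores.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```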
