Abstract

Representation learning improves recommendation accuracy by mining high-order neighbor information on the user-item graph, but it still suffers from the data sparsity and cold-start problems. Although knowledge-aware representation learning can alleviate these problems to a certain extent by using knowledge graphs to capture rich additional information about items, it relies heavily on large amounts of training data and annotations in a supervised learning manner. Self-supervised learning has proven to be a good substitute because it avoids the cost of annotating large-scale datasets. In this paper, we explore self-supervised contrastive learning on the hybrid structure of the knowledge graph and the user-item graph to address the above problems. We design a Knowledge-aware Self-supervised Graph Contrastive Learning model called KSGL. The core idea is to learn effective representations of users and items by pulling the augmented versions of the same user/item close to each other while pushing apart those of different ones. Specifically, KSGL first performs data augmentation on the input hybrid graph to generate multiple views of each target node, then refines the node embeddings in each view through graph convolutional networks (GCNs), and finally updates the model with a contrastive loss. We conduct experiments on three benchmark datasets to demonstrate the effectiveness of KSGL; the results show that our model not only improves recommendation accuracy but also gains robustness against interaction noise.

Keywords: Self-supervised learning · Knowledge graph · Graph convolutional networks · Recommendation
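To make the described pipeline concrete, below is a minimal PyTorch sketch of a KSGL-style contrastive training step. It assumes edge dropout as the graph augmentation, a parameter-free mean-aggregation propagation (LightGCN-style) as the GCN encoder, and the InfoNCE objective as the contrastive loss; the function names (`edge_dropout`, `gcn_propagate`, `info_nce`) and all hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: augmentation, encoder, and loss are assumptions,
# not KSGL's confirmed architecture.
import torch
import torch.nn.functional as F

def edge_dropout(adj: torch.Tensor, drop_prob: float = 0.1) -> torch.Tensor:
    """Create an augmented view by randomly zeroing entries of a dense
    adjacency matrix of the hybrid user-item / knowledge graph."""
    mask = (torch.rand_like(adj) >= drop_prob).float()
    return adj * mask

def gcn_propagate(adj: torch.Tensor, emb: torch.Tensor, layers: int = 2) -> torch.Tensor:
    """Refine node embeddings by repeated neighborhood averaging
    (a parameter-free, LightGCN-style convolution)."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    norm_adj = adj / deg
    out = emb
    for _ in range(layers):
        out = norm_adj @ out
    return out

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Contrastive loss: pull the two views of the same node together,
    push apart views of different nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: 6 nodes of the hybrid graph with 16-dim embeddings.
torch.manual_seed(0)
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()       # symmetrize the toy graph
emb = torch.nn.Parameter(torch.randn(6, 16))

view1 = gcn_propagate(edge_dropout(adj), emb)   # first augmented view
view2 = gcn_propagate(edge_dropout(adj), emb)   # second augmented view
loss = info_nce(view1, view2)
loss.backward()                           # gradients flow to the embedding table
print(float(loss))
```

In models of this kind, the temperature tau and the dropout ratio control how hard the contrast is; the toy usage above shows the two views being produced from the same shared embedding table, which is what the contrastive loss ultimately updates.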
