Abstract

Graph convolutional network (GCN)-based recommendation has recently attracted significant attention in the recommender system community. Although current studies propose various GCNs to improve recommendation performance, existing methods suffer from two main limitations. First, user–item interaction data is generally sparse in practice, which leaves these methods ineffective at learning user and item feature representations. Second, they usually model user preferences on items with a simple dot-product operation, which leads to inaccurate preference learning. To address these limitations, this study adopts a design that differs sharply from existing work. Specifically, we introduce the concept of knowledge distillation into GCN-based recommendation and propose a two-phase knowledge distillation model (TKDM) to improve recommendation performance. In Phase I, a self-distillation method on a graph auto-encoder learns user and item feature representations; the auto-encoder employs a simple two-layer GCN as the encoder and a fully connected layer as the decoder. Building on these representations, in Phase II, a mutual-distillation method on a fully connected layer learns user preferences on items with triple-based Bayesian personalized ranking (BPR). Extensive experiments on three real-world data sets demonstrate that TKDM outperforms classic and state-of-the-art GCN-based recommendation methods.
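
To make the architecture outlined above concrete, the PyTorch sketch below shows one plausible shape for the pieces the abstract names: a two-layer GCN encoder with a fully connected decoder (Phase I backbone) and a fully connected preference layer trained with a triple-based BPR objective (Phase II). This is a minimal illustration under our own assumptions; the class names (`GCNLayer`, `GraphAutoEncoder`, `PreferenceNet`), the embedding size, and the toy adjacency are hypothetical, and the paper's actual self- and mutual-distillation losses are not specified in the abstract, so they are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: propagate node features over a normalized
    user-item adjacency matrix, then apply a linear transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, norm_adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # norm_adj: (N, N) normalized adjacency; x: (N, in_dim) node features
        return self.linear(norm_adj @ x)


class GraphAutoEncoder(nn.Module):
    """Phase I backbone as described in the abstract: a simple two-layer
    GCN encoder plus a fully connected decoder. The self-distillation
    loss applied to this auto-encoder is not detailed in the abstract."""

    def __init__(self, num_nodes: int, emb_dim: int = 64):
        super().__init__()
        self.embeddings = nn.Embedding(num_nodes, emb_dim)  # users + items
        self.gcn1 = GCNLayer(emb_dim, emb_dim)
        self.gcn2 = GCNLayer(emb_dim, emb_dim)
        self.decoder = nn.Linear(emb_dim, emb_dim)  # fully connected decoder

    def forward(self, norm_adj: torch.Tensor):
        h0 = self.embeddings.weight
        h1 = F.relu(self.gcn1(norm_adj, h0))
        h2 = self.gcn2(norm_adj, h1)   # latent user/item representations
        recon = self.decoder(h2)       # decoder output (reconstruction)
        return h2, recon


class PreferenceNet(nn.Module):
    """Phase II scorer sketch: a fully connected layer mapping a
    (user, item) embedding pair to a preference score, in place of the
    plain dot product the abstract criticizes."""

    def __init__(self, emb_dim: int):
        super().__init__()
        self.fc = nn.Linear(2 * emb_dim, 1)

    def forward(self, user_emb: torch.Tensor, item_emb: torch.Tensor) -> torch.Tensor:
        return self.fc(torch.cat([user_emb, item_emb], dim=-1)).squeeze(-1)


def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Triple-based BPR: for each (user, positive item, negative item)
    triple, push the observed item's score above the unobserved one's."""
    return -F.logsigmoid(pos_scores - neg_scores).mean()


if __name__ == "__main__":
    # Toy run on a random 10-node graph (shapes are assumptions only).
    n = 10
    adj = torch.rand(n, n)
    adj = (adj + adj.T) / 2                       # symmetric toy adjacency
    model = GraphAutoEncoder(num_nodes=n)
    scorer = PreferenceNet(emb_dim=64)
    h, recon = model(adj)
    u, i, j = h[0:1], h[1:2], h[2:3]              # one (user, pos, neg) triple
    loss = bpr_loss(scorer(u, i), scorer(u, j))
    print(loss.item())
```

The FC scorer here reflects the abstract's motivation for Phase II: replacing the dot product with a learned layer lets the model capture preference signals a fixed inner product cannot, while the BPR objective still only needs relative orderings over sampled triples.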
