Abstract

Conversational recommender systems (CRSs) utilize natural language interactions and dialog history to infer user preferences and provide accurate recommendations. Due to the limited conversation context and background knowledge, existing CRSs rely on external sources such as knowledge graphs (KGs) to enrich the context and model entities based on their interrelations. However, these methods ignore the rich intrinsic information within entities. To address this, we introduce the knowledge-enhanced entity representation learning (KERL) framework, which leverages both the KG and a pretrained language model (PLM) to improve the semantic understanding of entities for CRS. In our KERL framework, entity textual descriptions are encoded via a PLM, while a KG helps reinforce the representation of these entities. We also employ positional encoding to effectively capture the temporal information of entities in a conversation. The enhanced entity representation is then used to develop a recommender component that fuses both entity and contextual representations for more informed recommendations, as well as a dialog component that generates informative entity-related information in the response text. To facilitate this study, we construct a high-quality KG with aligned entity descriptions, namely the Wiki Movie Knowledge Graph (WikiMKG). The experimental results show that KERL achieves state-of-the-art results in both recommendation and response generation tasks. Our code is publicly available at https://github.com/icedpanda/KERL.
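To make the framework described above more concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of how the pieces named in the abstract could fit together: a text-based entity view standing in for PLM-encoded descriptions, a KG-based entity view standing in for a graph encoder over WikiMKG, sinusoidal positional encoding over the entities mentioned in the dialog, and a fusion with a context vector for recommendation scoring. All module names, dimensions, and the mean-pooling choice are illustrative assumptions.

```python
# Illustrative sketch only: the real KERL model encodes entity descriptions with
# a PLM and entity structure with a KG encoder; here both are stubbed as
# embedding tables so the example stays self-contained and runnable.
import math
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding over the entity-mention sequence."""

    def __init__(self, d_model: int, max_len: int = 128):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_mentioned, d_model), entities ordered by time of mention
        return x + self.pe[: x.size(1)]


class KERLSketch(nn.Module):
    """Hypothetical recommender head fusing text- and KG-based entity views."""

    def __init__(self, num_entities: int, d_model: int = 256):
        super().__init__()
        # Stand-in for PLM-encoded entity descriptions (e.g., BERT over text).
        self.text_emb = nn.Embedding(num_entities, d_model)
        # Stand-in for KG-derived entity representations (e.g., a GNN over WikiMKG).
        self.kg_emb = nn.Embedding(num_entities, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.pos_enc = PositionalEncoding(d_model)

    def entity_repr(self, entity_ids: torch.Tensor) -> torch.Tensor:
        # Concatenate the two views and project to a single enhanced representation.
        both = torch.cat([self.text_emb(entity_ids), self.kg_emb(entity_ids)], dim=-1)
        return self.fuse(both)

    def forward(self, entity_ids: torch.Tensor, context_vec: torch.Tensor) -> torch.Tensor:
        # entity_ids: (batch, num_mentioned) entities in order of mention
        # context_vec: (batch, d_model) dialog-context representation
        ent = self.pos_enc(self.entity_repr(entity_ids))
        user = ent.mean(dim=1) + context_vec  # pooled user preference + context
        # Score every candidate entity against the fused user representation.
        all_items = self.fuse(torch.cat([self.text_emb.weight, self.kg_emb.weight], dim=-1))
        return user @ all_items.t()  # (batch, num_entities) recommendation scores


# Toy usage: 1000 candidate entities, a batch of 2 dialogs mentioning 3 entities each.
model = KERLSketch(num_entities=1000)
scores = model(torch.randint(0, 1000, (2, 3)), torch.randn(2, 256))
print(scores.shape)  # torch.Size([2, 1000])
```

The dialog component described in the abstract, which conditions response generation on these enhanced entity representations, is omitted here for brevity.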
