Abstract

A conversational recommender system (CRS) enhances a traditional recommender system by acquiring up-to-date user preferences through dialogue, where in each round an agent must decide whether to ask or recommend, which attributes to ask about, and which items to recommend. Most CRS frameworks adopt reinforcement learning to address these questions. However, existing studies largely overlook the connection between previous rounds and the current round of a conversation, which can deprive the agent of prior knowledge and lead to inaccurate decisions. We therefore propose to model the connections between rounds within a dialogue session through deep transformer-based multi-channel meta-reinforcement learning, so that the CRS agent can base each decision on the states, actions, and rewards of previous rounds. In addition, to better exploit a user's historical preferences, we introduce a more dynamic and personalized graph structure that supports both the conversation module and the recommendation module. Experiments on five real-world datasets and an online evaluation with real users in an industrial environment demonstrate that our method improves over state-of-the-art approaches and confirm the effectiveness of our designs.
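The core idea of conditioning the agent's decision on the (state, action, reward) history of earlier rounds can be illustrated with a minimal sketch. The code below is not the paper's implementation; all module names, dimensions, and the two-way ask/recommend action space are illustrative assumptions. It embeds each channel separately, sums them into per-round tokens, and lets a transformer encoder attend over the session before scoring the next decision.

```python
# A minimal sketch (illustrative, not the authors' code) of a policy that
# conditions on previous rounds' states, actions, and rewards via a transformer.
import torch
import torch.nn as nn

class HistoryConditionedPolicy(nn.Module):
    def __init__(self, state_dim=64, n_actions=2, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Separate "channels": project states, actions, and rewards into a shared space.
        self.state_proj = nn.Linear(state_dim, d_model)
        self.action_emb = nn.Embedding(n_actions, d_model)
        self.reward_proj = nn.Linear(1, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Head scoring the current round's decision, e.g. ask vs. recommend.
        self.policy_head = nn.Linear(d_model, n_actions)

    def forward(self, states, actions, rewards):
        # states:  (batch, rounds, state_dim)  conversation state per past round
        # actions: (batch, rounds)             integer ids of past decisions
        # rewards: (batch, rounds, 1)          per-round user feedback
        tokens = (self.state_proj(states)
                  + self.action_emb(actions)
                  + self.reward_proj(rewards))
        h = self.encoder(tokens)            # (batch, rounds, d_model)
        return self.policy_head(h[:, -1])   # logits for the next decision

# Usage: score ask (0) vs. recommend (1) given three previous rounds.
policy = HistoryConditionedPolicy()
logits = policy(torch.randn(1, 3, 64),
                torch.tensor([[0, 0, 1]]),
                torch.randn(1, 3, 1))
```

In this sketch the self-attention over round tokens is what carries prior knowledge forward, so early decisions and their rewards remain visible when the agent chooses its next action.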
