Abstract

Recommendation systems have been applied effectively in many fields, but their internal decision-making processes remain largely opaque. This opacity can significantly undermine users' trust in a recommendation system, so explaining the reasons behind model decisions has become an urgent task. Previous studies often used LSTMs and similar models to generate recommendation explanations in text form. However, these traditional methods cannot effectively exploit the ID information of users and items, and the text they generate is highly repetitive. To address these problems, this paper designs an explanation generation model that combines prompt learning with a graph encoder. To narrow the semantic gap between user and item ID information and natural language, and to capture high-order interaction information, a graph encoder based on user similarity learns the interactive semantic information of user and item IDs and constructs a continuous prompt. A discrete prompt, composed of the discrete features of users and items, is then combined with the continuous prompt into a hybrid prompt, which is fed into a pre-trained language model to generate the recommendation explanation. Experiments on three publicly available datasets, with comparisons against several state-of-the-art methods, demonstrate the personalization and text quality of the generated explanations.
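
To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the hybrid-prompt idea: a graph encoder turns user and item IDs into a continuous prompt, discrete user/item features become a discrete prompt, and both are prepended to the explanation tokens before a pre-trained language model. The LightGCN-style propagation, the choice of GPT-2, and all module names, dimensions, and toy inputs are assumptions; the paper's user-similarity graph is represented here only by a placeholder adjacency matrix.

```python
# Sketch of hybrid-prompt explanation generation (illustrative assumptions throughout).
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast


class GraphPromptEncoder(nn.Module):
    """LightGCN-style propagation over a (user + item) graph, then a projection
    of the resulting ID embeddings into the language model's embedding space."""

    def __init__(self, n_users, n_items, id_dim, lm_dim, n_layers=2):
        super().__init__()
        self.id_emb = nn.Embedding(n_users + n_items, id_dim)
        self.n_layers = n_layers
        self.to_lm = nn.Linear(id_dim, lm_dim)  # bridge ID space -> LM space

    def forward(self, norm_adj, user_idx, item_idx):
        # norm_adj: (n_users + n_items, n_users + n_items) normalized adjacency.
        x = self.id_emb.weight
        layers = [x]
        for _ in range(self.n_layers):
            x = norm_adj @ x                    # neighborhood aggregation
            layers.append(x)
        x = torch.stack(layers).mean(0)         # average the layer outputs
        u_vec, i_vec = x[user_idx], x[item_idx]
        # Two continuous prompt vectors per (user, item) pair.
        return self.to_lm(torch.stack([u_vec, i_vec], dim=1))


lm = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2TokenizerFast.from_pretrained("gpt2")
encoder = GraphPromptEncoder(n_users=4, n_items=6, id_dim=64, lm_dim=lm.config.n_embd)

# Toy graph: identity matrix stands in for the real user-similarity adjacency.
norm_adj = torch.eye(10)
user_idx, item_idx = torch.tensor([0]), torch.tensor([4 + 2])  # item ids offset by n_users

# Continuous prompt from the graph encoder: (batch, 2, lm_dim).
cont_prompt = encoder(norm_adj, user_idx, item_idx)

# Discrete prompt: feature words of the user/item, embedded with the LM's own table.
feat_ids = tok(" battery screen", return_tensors="pt").input_ids
disc_prompt = lm.transformer.wte(feat_ids)

# Ground-truth explanation tokens (training target).
expl_ids = tok(" The battery lasts all day.", return_tensors="pt").input_ids
expl_emb = lm.transformer.wte(expl_ids)

# Hybrid prompt = continuous + discrete prompt, prepended to the explanation.
inputs_embeds = torch.cat([cont_prompt, disc_prompt, expl_emb], dim=1)
prompt_len = cont_prompt.size(1) + disc_prompt.size(1)
labels = torch.cat(
    [torch.full((1, prompt_len), -100, dtype=torch.long), expl_ids], dim=1
)  # -100 masks prompt positions out of the language-modeling loss

loss = lm(inputs_embeds=inputs_embeds, labels=labels).loss
print(float(loss))  # backpropagating this loss would train the graph encoder (and optionally the LM)
```

At inference time, the same hybrid prompt (without the explanation tokens) would be used to condition generation, so the explanation reflects both the graph-encoded ID semantics and the discrete feature words.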
