Abstract

Recently, with the aim of enhancing the trustworthiness of recommender systems, explainable recommendation has attracted much attention from the research community. Intuitively, users' opinions toward different aspects of an item determine their ratings of (i.e., preferences for) that item. Rating prediction from the perspective of opinions can therefore provide personalized explanations at the level of item aspects and user preferences. However, developing an opinion-based explainable recommendation model faces several challenges: (1) the complicated relationship between users' opinions and their ratings, and (2) the difficulty of predicting potential (i.e., unseen) user-item opinions due to the sparsity of opinion information. To tackle these challenges, we propose an overall preference-aware, opinion-based explainable rating prediction model that jointly models the multiple observations of a user-item interaction (i.e., review, opinion, and rating). To alleviate the sparsity problem and improve the effectiveness of opinion prediction, we further propose a triple dual learning-based framework with a newly designed triple dual constraint. Finally, experiments on three popular datasets demonstrate the effectiveness and strong explanation performance of our framework.
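
To make the joint-modeling idea concrete, the sketch below shows one possible way to combine a rating loss, an aspect-level opinion loss, and a consistency regularizer in a single training objective. It is only an illustration of the general setup described in the abstract, not the paper's actual architecture: the model class JointOpinionRatingModel, its linear heads, and the simple mean-opinion consistency term standing in for the triple dual constraint are all assumptions introduced here for exposition.

    # Illustrative sketch only (assumed design, not the paper's model):
    # jointly predict aspect-level opinions and an overall rating, and train
    # with a combined objective.
    import torch
    import torch.nn as nn

    class JointOpinionRatingModel(nn.Module):
        def __init__(self, n_users, n_items, n_aspects, dim=32):
            super().__init__()
            self.user_emb = nn.Embedding(n_users, dim)
            self.item_emb = nn.Embedding(n_items, dim)
            # Hypothetical head predicting aspect-level opinion scores.
            self.opinion_head = nn.Linear(2 * dim, n_aspects)
            # Hypothetical head mapping opinions plus user/item features to a rating.
            self.rating_head = nn.Linear(n_aspects + 2 * dim, 1)

        def forward(self, users, items):
            u = self.user_emb(users)
            v = self.item_emb(items)
            uv = torch.cat([u, v], dim=-1)
            opinions = torch.tanh(self.opinion_head(uv))  # opinion scores in [-1, 1]
            rating = self.rating_head(torch.cat([opinions, uv], dim=-1)).squeeze(-1)
            return opinions, rating

    def joint_loss(opinions_pred, rating_pred, opinions_true, rating_true, lam=0.1):
        # Rating loss + opinion loss + a consistency penalty tying the rating to
        # the mean predicted opinion. The last term is purely illustrative; the
        # abstract does not specify the form of the triple dual constraint.
        mse = nn.MSELoss()
        l_rating = mse(rating_pred, rating_true)
        l_opinion = mse(opinions_pred, opinions_true)
        l_dual = mse(rating_pred, opinions_pred.mean(dim=-1))
        return l_rating + l_opinion + lam * l_dual

    if __name__ == "__main__":
        model = JointOpinionRatingModel(n_users=100, n_items=200, n_aspects=5)
        users = torch.tensor([0, 1, 2])
        items = torch.tensor([10, 20, 30])
        opinions_pred, rating_pred = model(users, items)
        opinions_true = torch.zeros(3, 5)       # placeholder supervision
        rating_true = torch.tensor([3.0, 4.0, 5.0])
        loss = joint_loss(opinions_pred, rating_pred, opinions_true, rating_true)
        loss.backward()
        print(float(loss))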
