Abstract

Explainable recommendation aims to generate personalized explanations for suggested items, which are recommended based on the historical interactions (e.g., ratings) between users and items. Review contents are often taken as a proxy for explanations. However, most review-based models presume sentiment consistency between user ratings and review contents, ignoring their inconsistency in real applications. By analyzing three real-world datasets, we observe that a user may express a positive (negative) opinion toward an item in terms of the rating value but a negative (positive) sentiment in terms of the review content, and such contradictory cases account for over 40% of all cases overall. To resolve this issue, in this paper we propose a novel explainable recommendation model called PESI, which generates accurate Personalized Explanations for recommendation by taking the Sentiment Inconsistency between ratings and reviews into account. Specifically, PESI consists of three modules: rating prediction, explanation generation, and a novel rating-review inconsistency extraction module. The inconsistency extraction module disentangles ratings and reviews, distinguishing shared features from private ones, and enforces accurate disentanglement through contrastive learning objectives. The extracted inconsistent features are then injected into the explanation generation module to produce more personalized and higher-quality explanations. Experimental results on the three datasets show that PESI consistently outperforms competing methods in terms of explanation quality.
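To make the disentanglement idea concrete, the following is a minimal, hypothetical sketch of how rating and review representations could be split into shared and private features and trained with a contrastive alignment objective. The class and function names (InconsistencyExtractor, info_nce), the loss weighting, and the use of a cosine-similarity separation penalty are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InconsistencyExtractor(nn.Module):
    """Hypothetical sketch: project rating and review representations into
    shared and private (modality-specific) feature spaces."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.rating_shared = nn.Linear(dim, hidden)
        self.rating_private = nn.Linear(dim, hidden)
        self.review_shared = nn.Linear(dim, hidden)
        self.review_private = nn.Linear(dim, hidden)

    def forward(self, rating_repr, review_repr):
        return (self.rating_shared(rating_repr), self.rating_private(rating_repr),
                self.review_shared(review_repr), self.review_private(review_repr))

def info_nce(anchor, positive, temperature=0.1):
    """InfoNCE-style contrastive loss with in-batch negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random user-item pair representations (assumed shapes).
B, dim, hidden = 32, 64, 32
extractor = InconsistencyExtractor(dim, hidden)
rating_repr = torch.randn(B, dim)
review_repr = torch.randn(B, dim)
rs, rp, vs, vp = extractor(rating_repr, review_repr)

# Pull the shared rating/review features of the same user-item pair together,
# and keep each modality's private features dissimilar from its shared features
# (one common disentanglement recipe; the exact objectives in PESI may differ).
align_loss = info_nce(rs, vs)
sep_loss = (F.cosine_similarity(rs, rp, dim=-1).abs().mean()
            + F.cosine_similarity(vs, vp, dim=-1).abs().mean())
loss = align_loss + 0.1 * sep_loss
loss.backward()
```

Under this kind of setup, the private (modality-specific) features are what capture the rating-review inconsistency, and those are the features that would be fed to the explanation generator.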
