Abstract

Current research on recommender systems focuses mainly on improving recommendation accuracy and pays less attention to explainability. Explainability is essential for enhancing users’ trust and satisfaction, and can even increase the likelihood of purchasing items. In existing explainable recommender systems, the mining of explicit and implicit features is not comprehensive, and the interactions among these features are not sufficiently considered. In addition, the generated recommended reason text is neither personalized nor rich enough; its quality must be improved, because low-quality text cannot meet the needs of different users. In this paper, we propose a new method that fuses external knowledge and aspect sentiment to predict ratings and generate personalized, content-rich recommended reasons: it applies a fine-tuned BERT to aspect-based sentiment analysis and extends the Transformer with bi-directional attention to generate the recommended reason. Experimental results on real-world datasets demonstrate that our method is effective and that our model outperforms the baseline models on various metrics. For the rating prediction task, our model achieves an average improvement of 0.6% in terms of RMSE. For the recommended reason generation task, our model achieves an improvement of 9.2% to 11.3% over the state of the art in terms of BLEU.
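
As a concrete illustration of the aspect-based sentiment analysis step, the sketch below casts it as sentence-pair classification with a fine-tuned BERT, using the Hugging Face transformers library. The backbone name, label set, and example review are illustrative assumptions, not the paper's exact configuration, and the classification head would need to be fine-tuned on labeled (review, aspect, polarity) triples before its predictions are meaningful.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

MODEL_NAME = "bert-base-uncased"              # assumed backbone
LABELS = ["negative", "neutral", "positive"]  # assumed polarity labels

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
# The classification head is randomly initialized here; in practice it
# must first be fine-tuned on (review, aspect, polarity) triples.
model = BertForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)
model.eval()

review = "The battery lasts all day, but the screen scratches easily."
aspect = "battery"

# Encoding (review, aspect) as a sentence pair lets BERT's self-attention
# condition the [CLS] representation on the queried aspect.
inputs = tokenizer(review, aspect, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```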

Highlights

  • With the rapid development of the Internet, people are enjoying the great convenience brought by the information era

  • Research on recommender systems can generally be divided into collaborative filtering (CF) [1], content-based [2], and hybrid methods [3]

  • We propose a novel recommended reason generation model, which uses a bi-directional attention mechanism to effectively fuse external knowledge and aspect sentiment (see the sketch after this list)
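
The bi-directional attention fusion mentioned in the last highlight can be sketched as two cross-attention passes, one in each direction, whose outputs are combined into a single fused representation. The module layout, dimensions, and mean-pooling below are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class BiDirectionalFusion(nn.Module):
    """Fuse two feature sequences with cross-attention in both directions."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.k2a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.a2k = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, knowledge: torch.Tensor, aspect: torch.Tensor) -> torch.Tensor:
        # Each side queries the other, so information flows in both directions.
        k_ctx, _ = self.k2a(query=knowledge, key=aspect, value=aspect)
        a_ctx, _ = self.a2k(query=aspect, key=knowledge, value=knowledge)
        # Mean-pool each attended sequence and project to one fused vector.
        fused = torch.cat([k_ctx.mean(dim=1), a_ctx.mean(dim=1)], dim=-1)
        return self.proj(fused)

knowledge = torch.randn(2, 10, 256)  # e.g., entity embeddings from a knowledge graph
aspect = torch.randn(2, 6, 256)      # e.g., aspect-sentiment embeddings
print(BiDirectionalFusion()(knowledge, aspect).shape)  # torch.Size([2, 256])
```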



Introduction

With the rapid development of the Internet, people are enjoying the great convenience brought by the information era, but they are also facing the problems caused by information overload. Many powerful neural network recommendation algorithms have been proposed [4]–[6]. These algorithms improved accuracy, but they have deficiencies in explainability: because these recommender systems pay little attention to explainability, recommended reason text is usually not generated [7]. Some deep learning recommendation models achieve good accuracy, but these models lack explainability.
