Display Content, Display Methods, and Evaluation Methods of the HCI in Explainable Recommender Systems: A Survey
- Research Article
- 10.1145/3653984
- Dec 26, 2024
- ACM Transactions on Intelligent Systems and Technology
Recommender systems have become increasingly important in navigating the vast amount of information and options available in various domains. By tailoring recommendations to user preferences and interests, these systems improve user experience, efficiency, and satisfaction. With the growing demand for transparency and understanding of recommendation outputs, explainable recommender systems have attracted increasing attention in recent years. Additionally, as user reviews can be considered the rationales behind why a user likes (or dislikes) a product, generating informative and reliable reviews alongside recommendations has emerged as a research focus in explainable recommendation. However, model-generated reviews might contain factually inconsistent content (i.e., the hallucination issue), which compromises the recommendation rationales. To address this issue, in this article we propose a contrastive learning framework to improve faithfulness and factuality in explainable recommendation. We further develop different strategies for generating positive and negative examples for contrastive learning, such as back-translation or synonym substitution for positive examples, and editing positive examples or utilizing model-generated texts for negative examples. Our proposed method optimizes the model to distinguish faithful explanations (i.e., positive examples) from unfaithful ones with factual errors (i.e., negative examples), which drives the model to generate faithful reviews as explanations while avoiding inconsistent content. Extensive experiments and analysis on three benchmark datasets show that our proposed model outperforms other review generation baselines in faithfulness and factuality. In addition, the proposed contrastive learning component can be easily incorporated into other explainable recommender systems in a plug-and-play manner.
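To make the contrastive objective concrete, here is a minimal sketch in PyTorch, assuming a margin-based formulation over embeddings of positive (faithful) and negative (corrupted) explanation variants. The function name, the cosine scorer, and the random stand-in embeddings are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a margin-based contrastive objective for
# faithful explanation generation; not the paper's exact method.
import torch
import torch.nn.functional as F

def contrastive_faithfulness_loss(anchor, positives, negatives, margin=0.5):
    """Scores faithful explanations (positives) above factually
    perturbed ones (negatives) relative to the review anchor.

    anchor:    (d,) embedding of the ground-truth review
    positives: (P, d) embeddings of meaning-preserving variants
               (e.g., back-translations, synonym substitutions)
    negatives: (N, d) embeddings of factually corrupted variants
               (e.g., edited positives, raw model generations)
    """
    pos_sim = F.cosine_similarity(anchor.unsqueeze(0), positives)  # (P,)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(0), negatives)  # (N,)
    # Hinge: every negative should score at least `margin` below
    # every positive.
    gaps = margin - (pos_sim.unsqueeze(1) - neg_sim.unsqueeze(0))  # (P, N)
    return F.relu(gaps).mean()

# Toy usage with random embeddings standing in for encoder outputs.
d = 16
loss = contrastive_faithfulness_loss(
    torch.randn(d), torch.randn(3, d), torch.randn(4, d))
print(loss.item())
```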
- Research Article
- 10.1016/j.cose.2022.102746
- Apr 27, 2022
- Computers & Security
Latest trends of security and privacy in recommender systems: A comprehensive review and future perspectives
- Research Article
- 10.1016/j.knosys.2022.108954
- May 8, 2022
- Knowledge-Based Systems
A survey for trust-aware recommender systems: A deep learning perspective
- Book Chapter
- 10.1007/978-3-031-60343-3_10
- Sep 20, 2024
In producer-customer interactions, humanistic management places strong emphasis on end-user interests. This perspective embraces several more focused philosophies that value human development, potential, and dignity. Humanistic marketing arose as a reaction to an emerging mega-trend calling for a reevaluation of marketing. Recently, 5.0 marketing management was created by integrating conventional theories of consumer behavior with fundamental concepts from humanistic psychology, such as the capacity for self-actualization, self-direction, and choice. Current research on online consumer behavior examines how customers select products on e-commerce platforms, a process in which recommendation engines are crucial. A recommender system is an information filtering system that suggests products or services based on the user's strongest interests. The present cross-sectional study investigates how the main types of recommender systems (social-aware, robust, and explainable recommender systems) are perceived by individuals, depending on three psychological characteristics: trust, suspiciousness, and fast versus slow thinking in decision-making. A sequential mediation analysis was employed, and a significant indirect effect was observed, indicating the impact of the anchoring effect. Implications are discussed with regard to an efficient 5.0 marketing management strategy.
- Research Article
- 10.3390/info16040282
- Mar 30, 2025
- Information
Recommender systems have evolved significantly in recent years, using advanced techniques such as explainable artificial intelligence, reinforcement learning, and graph neural networks to enhance both efficiency and transparency. This study presents a novel framework, XR2K2G (X for explainability, the first R for recommender systems, the second R for reinforcement learning, the first K for knowledge graphs, the second K for knowledge distillation, and G for graph-based techniques), with the goal of developing a next-generation recommender system focused on career empowerment. To optimize recommendations while ensuring sustainability and transparency, the proposed method integrates reinforcement learning with graph-based representations of career trajectories. It also incorporates knowledge distillation to further refine the model's performance by transferring knowledge from a larger model to a more efficient one. Our approach employs reinforcement learning algorithms, graph embeddings, and knowledge distillation to provide clear and comprehensible explanations alongside enhanced recommendations. In this work, we discuss the technical foundations of the framework, deployment strategies, and its practical applicability in real-world career scenarios. The effectiveness and interpretability of our approach are demonstrated through experimental results.
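For the knowledge-distillation component, the generic soft-label formulation below (tempered KL between teacher and student logits, mixed with ordinary cross-entropy) is one standard way to transfer knowledge from a larger model to a smaller one; it is a sketch under that assumption, and XR2K2G's exact objective may differ.

```python
# Generic soft-label knowledge distillation step (Hinton-style);
# illustrative only -- XR2K2G's exact objective may differ.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: student mimics the teacher's tempered distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: 8 examples, 5 candidate items.
s, t = torch.randn(8, 5, requires_grad=True), torch.randn(8, 5)
print(distillation_loss(s, t, torch.randint(0, 5, (8,))).item())
```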
- Research Article
- 10.3390/info14070401
- Jul 14, 2023
- Information
Significant attention has been paid to enhancing recommender systems (RS) with explanation facilities to help users make informed decisions and increase trust in and satisfaction with an RS. Justification and transparency represent two crucial goals in explainable recommendations. Different from transparency, which faithfully exposes the reasoning behind the recommendation mechanism, justification conveys a conceptual model that may differ from that of the underlying algorithm. An explanation is an answer to a question. In explainable recommendation, a user would want to ask questions (referred to as intelligibility types) to understand the results given by an RS. In this paper, we identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency. We followed the Human-Centered Design (HCD) approach and leveraged the What–Why–How visualization framework to systematically design and implement Why and How visual explanations in the transparent Recommendation and Interest Modeling Application (RIMA). Furthermore, we conducted a qualitative user study (N = 12) based on a thematic analysis of think-aloud sessions and semi-structured interviews with students and researchers to investigate the potential effects of providing Why and How explanations together in an explainable RS on users’ perceptions regarding transparency, trust, and satisfaction. Our study shows qualitative evidence confirming that the choice of the explanation intelligibility types depends on the explanation goal and user type.
- Research Article
- 10.3389/fdata.2024.1505284
- Jan 27, 2025
- Frontiers in Big Data
The rise of Large Language Models (LLMs), such as LLaMA and ChatGPT, has opened new opportunities for enhancing recommender systems through improved explainability. This paper provides a systematic literature review focused on leveraging LLMs to generate explanations for recommendations, a critical aspect of fostering transparency and user trust. We conducted a comprehensive search within the ACM Guide to Computing Literature, covering publications from the launch of ChatGPT (November 2022) to the present (November 2024). Our search yielded 232 articles, but after applying inclusion criteria, only six were identified as directly addressing the use of LLMs in explaining recommendations. This scarcity highlights that, despite the rise of LLMs, their application in explainable recommender systems is still at an early stage. We analyze these selected studies to understand current methodologies, identify challenges, and suggest directions for future research. Our findings underscore the potential of LLMs to improve recommendation explanations and encourage the development of more transparent and user-centric recommendation explanation solutions.
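The common pattern the reviewed studies describe can be sketched as prompting an LLM with the user's history and the recommended item to produce a natural-language explanation. The prompt wording, helper function, and model name below are illustrative assumptions, not drawn from any of the surveyed papers.

```python
# Illustrative pattern only: prompt an LLM to explain a recommendation.
# The prompt and model choice are assumptions, not from the survey.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_recommendation(user_history, recommended_item):
    prompt = (
        "A user recently interacted with these items: "
        f"{', '.join(user_history)}.\n"
        f"The system recommends: {recommended_item}.\n"
        "In two sentences, explain to the user why this item fits "
        "their apparent interests. Do not invent facts beyond the "
        "listed items."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_recommendation(
    ["The Martian", "Project Hail Mary", "Seveneves"],
    "Children of Time"))
```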
- Conference Article
- 10.1145/3340531.3411919
- Oct 19, 2020
Recommender systems play a fundamental role in web applications by filtering massive amounts of information and matching user interests. While many efforts have been devoted to developing more effective models in various scenarios, exploration of the explainability of recommender systems lags behind. Explanations can help improve the user experience and reveal system defects. In this paper, after formally introducing the elements related to model explainability, we propose a novel explainable recommendation model that improves the transparency of the representation learning process. Specifically, to overcome the representation entangling problem in traditional models, we revise traditional graph convolution to discriminate information from different layers. Also, each representation vector is factorized into several segments, where each segment relates to one semantic aspect in the data. Unlike previous work, in our model factor discovery and representation learning are conducted simultaneously, and we are able to handle extra attribute information and knowledge. In this way, the proposed model can learn interpretable and meaningful representations for users and items. Unlike traditional methods that must trade off explainability against effectiveness, the performance of our proposed explainable model is not negatively affected by considering explainability. Finally, comprehensive experiments are conducted to validate the performance of our model as well as explanation faithfulness.
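The two structural ideas (keeping per-layer propagation outputs separate and viewing each embedding as factor segments) can be illustrated with the minimal sketch below. The propagation rule, dimensions, and random inputs are assumptions for illustration; this is not the paper's model.

```python
# Minimal sketch (not the paper's model): a propagation step that keeps
# per-layer outputs separate and views each embedding as K factor
# segments, so each segment can be inspected as one semantic aspect.
import torch

def propagate(adj_norm, x, num_layers=3):
    """Return a list of per-layer embeddings instead of collapsing
    them, so downstream scoring can discriminate between layers."""
    layers = [x]
    for _ in range(num_layers):
        layers.append(adj_norm @ layers[-1])  # neighborhood averaging
    return layers

n_nodes, k_factors, seg_dim = 6, 4, 8
adj = torch.rand(n_nodes, n_nodes)
adj_norm = adj / adj.sum(dim=1, keepdim=True)  # row-normalize
x = torch.randn(n_nodes, k_factors * seg_dim)

per_layer = propagate(adj_norm, x)
# Factorized view: (node, factor, segment); each factor slice is a
# candidate "semantic aspect" of the representation.
factored = [h.view(n_nodes, k_factors, seg_dim) for h in per_layer]
print(len(factored), factored[0].shape)  # 4 tensors incl. the input
```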
- Research Article
- 10.29130/dubited.1667105
- Jul 31, 2025
- Düzce Üniversitesi Bilim ve Teknoloji Dergisi
Popularity bias is a prevalent issue in recommendation systems, where popular items dominate recommendation lists, leading to reduced diversity and fairness. Traditional methods evaluate popularity bias based on overall item frequency, disregarding individual user tendencies. This study introduces a novel post-processing ranking method called Dynamic User Tendency Re-ranking (DUTR) to mitigate popularity bias in multi-criteria recommendation systems by incorporating user-specific preferences. DUTR leverages SHAP (SHapley Additive exPlanations) analysis to determine the influence of different criteria on user decision-making. Unlike conventional methods, which classify item popularity based on general trends, DUTR dynamically assesses each user's priority preferences. It then classifies items as popular or less popular based on individual preference patterns. This approach ensures that recommendation lists align more closely with user-specific interests while maintaining a balance between popular and less popular items. To validate the effectiveness of DUTR, extensive experiments were conducted on the YM10 and YM20 datasets. The results show that DUTR significantly reduces popularity bias while improving diversity and fairness in recommendations. Moreover, the integration of SHAP values enhances the explainability of the recommendation process, providing users with personalized and transparent suggestions. In conclusion, comparative analysis with existing techniques demonstrates that DUTR outperforms traditional methods in balancing popularity and personalization.
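The post-processing idea can be sketched as a re-rank that penalizes popular items in proportion to a user-specific tendency weight. In DUTR that weight comes from SHAP analysis over multi-criteria ratings, which the toy version below does not reproduce; the function, weights, and data are hypothetical.

```python
# Toy popularity-debiasing re-rank, loosely in the spirit of DUTR; the
# real method derives the user weight from SHAP values over
# multi-criteria ratings, which is not reproduced here.
def rerank(candidates, user_pop_tendency, alpha=1.0):
    """candidates: list of (item, relevance, popularity in [0, 1]).
    user_pop_tendency in [0, 1]: 1 means the user genuinely favors
    popular items, so popularity is penalized less."""
    def adjusted(entry):
        item, rel, pop = entry
        penalty = alpha * (1.0 - user_pop_tendency) * pop
        return rel - penalty
    return sorted(candidates, key=adjusted, reverse=True)

cands = [("blockbuster", 0.90, 0.95), ("niche_gem", 0.85, 0.10),
         ("mid_tier", 0.80, 0.50)]
print([item for item, *_ in rerank(cands, user_pop_tendency=0.2)])
# For a user with low popularity tendency, niche_gem rises to the top.
```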
- Book Chapter
- 10.3233/faia230530
- Sep 28, 2023
Modern recommender systems utilize users’ historical behaviors to generate personalized recommendations. However, these systems often lack user controllability, leading to diminished user satisfaction and trust in the systems. Acknowledging the recent advancements in explainable recommender systems that enhance users’ understanding of recommendation mechanisms, we propose leveraging these advancements to improve user controllability. In this paper, we present a user-controllable recommender system that seamlessly integrates explainability and controllability within a unified framework. By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system by interacting with these explanations. Furthermore, we introduce and assess two attributes of controllability in recommendation systems: the complexity of controllability and the accuracy of controllability. Experimental evaluations on MovieLens and Yelp datasets substantiate the effectiveness of our proposed framework. Additionally, our experiments demonstrate that offering users control options can potentially enhance recommendation accuracy in the future. Source code and data are available at https://github.com/chrisjtan/ucr.
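A counterfactual-style retrospective explanation can be probed roughly as follows: remove one past interaction, re-score, and report whichever removal flips the top recommendation. This toy probe and its genre-overlap scorer are assumptions for illustration; the paper's actual implementation is in the linked UCR repository.

```python
# Toy counterfactual probe (not the UCR implementation; see
# https://github.com/chrisjtan/ucr for the real code): remove one past
# interaction and check whether the top recommendation changes.
def top_item(history, score_fn, catalog):
    return max(catalog, key=lambda item: score_fn(history, item))

def counterfactual_explanation(history, score_fn, catalog):
    base = top_item(history, score_fn, catalog)
    for h in history:
        reduced = [x for x in history if x != h]
        if top_item(reduced, score_fn, catalog) != base:
            return f"'{base}' is recommended because you watched '{h}'."
    return f"No single past item alone explains '{base}'."

# Toy scorer: count shared genre tags between history and candidate.
genres = {"Alien": {"scifi"}, "Heat": {"crime"},
          "Arrival": {"scifi"}, "Ronin": {"crime"}}
def score(history, item):
    return sum(len(genres[item] & genres[h]) for h in history)

print(counterfactual_explanation(["Alien", "Heat"], score,
                                 ["Arrival", "Ronin"]))
```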
- Conference Article
- 10.1145/3503252.3531304
- Jul 4, 2022
Despite the acknowledgment that the perception of explanations may vary considerably between end-users, explainable recommender systems (RS) have traditionally followed a one-size-fits-all model, whereby the same explanation level of detail is provided to every user without taking into consideration an individual user's context, i.e., goals and personal characteristics. To fill this research gap, we aim in this paper at a shift from a one-size-fits-all to a personalized approach to explainable recommendation by giving users agency in deciding which explanation they would like to see. We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations of the recommendations, with three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end-users. We conducted a within-subject study (N=31) to investigate the relationship between a user's personal characteristics and the explanation level of detail, and the effects of these two variables on the perception of the explainable RS with regard to different explanation goals. Our results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type. Consequently, we suggest some theoretical and design guidelines to support the systematic design of explanatory interfaces in RS tailored to the user's context.
- Research Article
- 10.7717/peerj-cs.3595
- Feb 6, 2026
- PeerJ Computer Science
Recommender systems (RSs), which provide recommendations tailored to user preferences, are valuable in managing information overload. Traditional recommendation systems usually function as black-box models and lack explanations; as a result, user trust and system transparency are adversely affected. Explainable RSs (XRSs) aim to overcome this issue by providing interpretable justifications for recommendations. Previous XRS studies suffer from limited integration of user reviews with knowledge graphs (KGs), resulting in incomplete user preference modeling and a lack of interpretability. Although improvements in XRSs have been achieved worldwide, studies on Arabic RSs lack advanced tools and explanation methods, such as KGs, because of resource limitations, the challenges posed by the Arabic language, and its different dialects. This study introduces Shareh, an explainable KG-based Arabic recommender that utilizes meta-path-guided reasoning and graph attention networks to fuse user reviews with a heterogeneous Arabic KG. Experimental results on the Books Reviews Arabic Dataset show that Shareh improves mean absolute error (MAE) and root mean squared error (RMSE) by approximately 30% compared to baseline models. The system, which has a model fidelity score of 99.76%, effectively backs nearly all recommendations with reasonable explanations. Such high fidelity indicates that the produced explanations accurately reflect the fundamental concepts of the model, thereby improving the system's interpretability and reliability and increasing user trust.
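For reference, the two reported error metrics are standard: MAE averages absolute rating errors, while RMSE takes the root of the mean squared error and thus penalizes large mistakes more heavily. The toy data below is purely illustrative.

```python
# Standard definitions of the two metrics reported above.
import math

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

truth, preds = [4.0, 3.5, 5.0, 2.0], [3.8, 3.0, 4.5, 2.5]
print(f"MAE={mae(truth, preds):.3f}, RMSE={rmse(truth, preds):.3f}")
```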
- Research Article
- 10.1109/tkde.2022.3226189
- Jan 1, 2022
- IEEE Transactions on Knowledge and Data Engineering
There is a critical issue in explainable recommender systems that compounds the challenges of explainability yet is rarely tackled: the lack of ground-truth explanation texts for training. It is unrealistic to expect every user-item pair in a dataset to have a corresponding target explanation. Hence, we pioneer the first non-supervised explainability architecture for review-based collaborative filtering (called NEAR) as our novel contribution to the theory of explanation construction in recommender systems. While maintaining excellent recommendation performance, our approach reformulates explainability as a non-supervised (i.e., unsupervised and self-supervised) explanation generation task. We formally define two explanation types, both of which NEAR can produce. An invariant explanation, fixed for all users, is based on an unsupervised extractive summary of an item's reviews obtained via embedding clustering. Meanwhile, a variant explanation, personalized for a specific user, is sentence-level text generated by our customized Transformer conditioned on each user-item-rating tuple and an artificial ground truth (self-supervised label) drawn from one of the invariant explanation's sentences. Our empirical evaluation illustrates that NEAR's rating prediction accuracy is better than that of other state-of-the-art baselines. Moreover, experiments and assessments show that NEAR-generated variant explanations are more personalized and distinct than those from other Transformer-based models, and our invariant explanations are preferred over those from other contemporary models in real-life evaluations.
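The generic idea behind the invariant explanation (an extractive summary via embedding clustering) can be sketched as: cluster review-sentence embeddings and keep the sentence nearest each centroid. The random embeddings below stand in for a real sentence encoder, and k-means is one plausible clustering choice; NEAR's actual encoder and clustering may differ.

```python
# Sketch of extractive summarization via embedding clustering;
# embeddings are random stand-ins, not NEAR's actual encodings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sentences = [f"review sentence {i}" for i in range(20)]
embeddings = rng.normal(size=(20, 32))  # stand-in for real encodings

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
summary = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    # Keep the member sentence closest to the cluster centroid.
    dists = np.linalg.norm(
        embeddings[members] - km.cluster_centers_[c], axis=1)
    summary.append(sentences[members[dists.argmin()]])
print(summary)
```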
- Research Article
- 10.1080/10447318.2023.2262797
- Oct 13, 2023
- International Journal of Human–Computer Interaction
Explainable recommender systems (RS) have traditionally followed a one-size-fits-all approach, delivering the same explanation level of detail to each user without considering their individual needs and goals. Further, explanations in RS have so far been presented mostly in a static and non-interactive manner. To fill these research gaps, we aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences. We followed a user-centered approach to design interactive explanations with three levels of detail (basic, intermediate, and advanced) and implemented them in the transparent Recommendation and Interest Modeling Application (RIMA). We conducted a qualitative user study (N = 14) to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS. Our study showed qualitative evidence that fostering interaction and giving users control in deciding which explanation they would like to see can meet the demands of users with different needs, preferences, and goals, and consequently can have positive effects on crucial aspects of explainable recommendation, including transparency, trust, satisfaction, and user experience.
- Conference Article
- 10.1145/3404835.3463248
- Jul 11, 2021
Recently, research on explainable recommender systems has drawn much attention from both academia and industry, resulting in a variety of explainable models. As a consequence, their evaluation approaches vary from model to model, which makes it quite difficult to compare the explainability of different models. To achieve a standard way of evaluating recommendation explanations, we provide three benchmark datasets for EXplanaTion RAnking (denoted as EXTRA), on which explainability can be measured by ranking-oriented metrics. Constructing such datasets, however, poses great challenges. First, user-item-explanation triplet interactions are rare in existing recommender systems, so finding alternatives becomes a challenge. Our solution is to identify nearly identical sentences from user reviews. This idea then leads to the second challenge, i.e., how to efficiently categorize the sentences in a dataset into different groups, since estimating the similarity between every pair of sentences has quadratic runtime complexity. To mitigate this issue, we provide a more efficient method based on Locality Sensitive Hashing (LSH) that can detect near-duplicates in sub-linear time for a given query. Moreover, we make our code publicly available to allow researchers in the community to create their own datasets.
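To illustrate why LSH avoids the quadratic pairwise comparison, here is a toy random-hyperplane (SimHash-style) scheme: vectors hashing to the same signature land in the same bucket, so a query is compared only against its own bucket rather than the whole corpus. This is one common LSH family for cosine similarity; the EXTRA paper's exact scheme may differ, and the dimensions and data are hypothetical.

```python
# Toy random-hyperplane LSH for near-duplicate lookup; one standard
# LSH family, not necessarily the EXTRA paper's exact scheme.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)
dim, n_bits = 64, 12
planes = rng.normal(size=(n_bits, dim))  # random hyperplanes

def signature(v):
    # Each bit records which side of a hyperplane the vector falls on.
    return tuple((planes @ v > 0).astype(int))

corpus = rng.normal(size=(1000, dim))
buckets = defaultdict(list)
for idx, v in enumerate(corpus):
    buckets[signature(v)].append(idx)

query = corpus[7] + 0.01 * rng.normal(size=dim)  # near-duplicate of #7
candidates = buckets[signature(query)]
# The query very likely lands in item 7's bucket, and only that one
# small bucket is scanned instead of all 1000 vectors.
print(7 in candidates, len(candidates))
```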