Abstract

With advancements in artificial intelligence (AI), explainable AI (XAI) has emerged as a promising tool for enhancing the explainability of complex machine learning models. However, the explanations generated by XAI methods may lead to cognitive biases among human users. To address this problem, this study investigates how to mitigate users’ cognitive biases based on their individual characteristics. In the literature review, we identified two factors that can help remedy such biases: 1) debiasing strategies, which have been reported to reduce biases in users’ decision-making by providing additional information or changing how information is delivered, and 2) explanation modality type. To examine the effects of these factors, we conducted an experiment with a 4 (debiasing strategy) × 3 (explanation type) between-subjects design. In the experiment, participants were exposed to an explainable interface that presented an AI’s outcomes along with explanatory information, and their behavioral and attitudinal responses were collected. Specifically, we statistically examined the effects of textual and visual explanations on users’ trust and confirmation bias toward AI systems, considering the moderating effects of debiasing methods and viewing time. The results demonstrated that textual explanations lead to higher trust in XAI systems compared to visual explanations. Moreover, we found that textual explanations are particularly beneficial for quick decision-makers when evaluating the outputs of AI systems. The results also indicated that cognitive bias can be effectively mitigated by providing users with a priori information. These findings have theoretical and practical implications for designing AI-based decision support systems that generate more trustworthy and equitable explanations.
