Abstract

Although the roots of artificial intelligence (AI) stretch back decades, AI currently flourishes in research and practice. However, AI faces trust issues. One possible solution is to have AI explain itself to its users, but it remains unclear how an AI can accomplish this in decision-making scenarios. This study focuses on how a user's expertise influences trust in explainable AI (XAI) and how that trust, in turn, shapes behaviour. To test our theoretical assumptions, we develop an AI-based decision support system (DSS) and observe user behaviour in an online experiment, complemented by survey data. The results show that domain-specific expertise negatively affects trust in the AI-based DSS. We conclude that the focus on explanations might be overrated for users with low domain-specific expertise, whereas it is vital for users with high expertise. By investigating the influence of expertise on explanations of an AI-based DSS, this study contributes to research on XAI and DSS.
