Abstract

Current developments in Artificial Intelligence (AI) have led to a resurgence of Explainable AI (XAI). New methods are being researched to obtain information from AI systems in order to generate explanations for their output. However, there is an overall lack of valid and reliable evaluations of how explanations affect users' experience and behavior. New XAI methods are often based on an intuitive notion of what an effective explanation should be. Rule-based and example-based contrastive explanations are two exemplary explanation styles. In this study we evaluate the effects of these two explanation styles on system understanding, persuasive power, and task performance in the context of decision support in diabetes self-management. Furthermore, we provide three sets of recommendations, based on our experience designing this evaluation, to help improve future evaluations. Our results show that rule-based explanations have a small positive effect on system understanding, whereas both rule-based and example-based explanations seem to persuade users to follow the advice even when it is incorrect. Neither explanation style improves task performance compared to no explanation. This can be explained by the fact that both explanation styles only provide details relevant to a single decision, not the underlying rationale or causality. These results underline the importance of user evaluations in assessing current assumptions and intuitions about effective explanations.

Highlights

  • Humans expect others to comprehensibly explain decisions that have an impact on them [1]

  • A lack of user evaluations characterizes the field of Explainable Artificial Intelligence (XAI)

  • A contribution of this paper is a set of recommendations for future user evaluations

Introduction

Humans expect others to comprehensibly explain decisions that have an impact on them [1]. A major goal of Explainable Artificial Intelligence (XAI) is to have AI systems construct explanations for their own output. Common purposes of these explanations are to increase system understanding [12], improve behavior predictability [13], and calibrate system trust [14,15,8]. However, the exact purpose of explanation methods is often not defined or formalized, even though these different purposes may result in profoundly different requirements for explanations [18]. This makes it difficult for the field of XAI to progress and to evaluate developed methods. The remainder of this paper first illustrates the shortcomings of current user evaluations, caused by either a lack of validity and reliability or the entire omission of an evaluation, and then discusses the two explanation styles used in our evaluation in more detail, illustrating their prevalence in the field of XAI.
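
To make the two explanation styles concrete, the sketch below shows what a rule-based and an example-based contrastive explanation might look like for a simplified decision-support advice. This is a hypothetical illustration only: the decision rule, the thresholds, the feature names (blood_glucose, planned_activity), and the wording of the explanations are assumptions made for exposition and are not taken from the system evaluated in the study.

```python
# Hypothetical sketch of the two contrastive explanation styles, illustrated
# for a toy diabetes decision-support advice. All rules, thresholds, and
# wording are invented for exposition.

from dataclasses import dataclass


@dataclass
class Situation:
    blood_glucose: float   # current measurement in mmol/L (assumed unit)
    planned_activity: str  # e.g. "exercise", "meal", "rest"


def advise(s: Situation) -> str:
    """Toy decision rule standing in for the decision-support system."""
    if s.blood_glucose < 4.0 and s.planned_activity == "exercise":
        return "postpone exercise and take carbohydrates"
    return "proceed as planned"


def rule_based_explanation(s: Situation, advice: str) -> str:
    """Rule-based contrastive style: state the rule that separates the given
    advice from the contrast case ('why this advice and not the other?')."""
    return (f"The advice is '{advice}' rather than 'proceed as planned' "
            f"because blood glucose ({s.blood_glucose} mmol/L) is below 4.0 "
            f"and exercise is planned.")


def example_based_explanation(s: Situation, advice: str) -> str:
    """Example-based contrastive style: point to a similar past case with a
    different outcome instead of stating the underlying rule."""
    return (f"The advice is '{advice}'. In a similar earlier situation with "
            f"blood glucose at 4.6 mmol/L, the advice was 'proceed as planned'.")


if __name__ == "__main__":
    situation = Situation(blood_glucose=3.6, planned_activity="exercise")
    advice = advise(situation)
    print(rule_based_explanation(situation, advice))
    print(example_based_explanation(situation, advice))
```

Note that both styles refer only to the facts of the single decision at hand; neither conveys the system's underlying rationale or causal model, which is consistent with the interpretation of the results given in the abstract.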
