Abstract

The purpose of this study is to examine the impact of an example-based explainable artificial intelligence (XAI) interface on trust, understanding, and performance in highly technical populations. XAI studies often focus on general users in low-risk domains; this study instead examined how showing the closest matches from the training data for two classes affected trust, understanding, and performance for highly technical users in a high-risk domain. We found that providing example-based explanations significantly increased trust and understanding without decreasing performance. Showing the most similar examples from two classes increased trust more than showing examples from only one class, and participants did not treat the different classes equivalently. The most important features for predicting how well an interface was understood were the helpfulness of the provided examples and the participant's trust in the human-machine team. Finally, we found that priming highly technical participants was particularly important when running XAI studies, in order to mitigate fears that their jobs would be impacted.
