Abstract

Explainable Artificial Intelligence (XAI) enables Artificial Intelligence (AI) to explain its decisions. This holds the promise of making AI more understandable to users, improving interaction, and establishing an adequate level of trust. We tested this claim in the high-risk task of AI-assisted mushroom hunting, where people had to decide whether a mushroom was edible or poisonous. In a between-subjects experiment, 328 visitors to an Austrian media art festival played a tablet-based mushroom hunting game while walking through a highly immersive artificial indoor forest. As part of the game, an artificially intelligent app analyzed photos of the mushrooms they found and recommended classifications. One group saw the AI’s decisions only, while a second group additionally received attribution-based and example-based visual explanations of the AI’s recommendation. The results show that participants with visual explanations outperformed participants without explanations in correct edibility assessments and pick-up decisions. This exhibition-based experiment thus replicated the decision-making results of a previous online study. However, unlike in the previous study, the visual explanations did not significantly affect levels of trust or acceptance measures. We therefore discuss the findings in terms of generalizability through a direct comparison with the previous study. Beyond the scientific contribution, we discuss how conducting XAI experiments in immersive, art- and game-based exhibition environments directly affects visitors and local communities by triggering reflection on, and awareness of, psychological issues of human–AI interaction.
