ABSTRACT
AI is anticipated to improve human decision-making across various domains, particularly in high-stakes, difficult tasks. However, human reliance on AI recommendations is often inappropriate. A common approach to address this is to provide decision makers with explanations of the AI output, but results have been mixed so far. It often remains unclear when people can rely appropriately on AI and when explanations help. In this work, we conducted a lab experiment (N = 34) to investigate how the appropriateness of human reliance on (explainable) AI depends on the mental workload induced by different decision difficulties. Instead of self-assessments, we used EEG (Emotiv Epoc Flex head cap, 32 wet electrodes) to measure participants' mental workload more directly. We found that the difficulty of a decision, as indicated by the induced mental workload, strongly influences participants' ability to rely appropriately on AI, assessed through relative self-reliance, relative AI reliance, and decision accuracy with and without AI. While reliance was appropriate for decisions inducing low mental workload, participants were prone to overreliance for decisions inducing high mental workload. Explanations had no significant effect in either case. Our results imply that alternatives to the common ‘recommend-and-explain’ approach should be explored to assist human decision-making in challenging tasks.