Abstract
Explainable artificial intelligence (XAI) is proposed to improve transparency and performance by providing information about an AI's limitations. Specifically, XAI could support appropriate behavior in cases where AI errors occur due to limited training data. These error-prone cases might be salient (pop out) because of their naturally rarer occurrence. The current study investigated how this pop-out effect influences the effectiveness of explainability on trust and dependence. In an online experiment, participants (N = 128) estimated the degree of contamination of bacterial stimuli. The lower occurrence of error-prone stimuli was indicated by one of two colors. Participants either knew which color marked the error-prone stimuli (XAI) or did not (nonXAI). Contrary to earlier research without salient error-prone trials, explainability did not help participants follow correct recommendations in non-error-prone trials, but it did help them correct the AI's errors in error-prone trials. However, explainability also led to over-correction in correct error-prone trials. This poses the challenge of implementing explainability while mitigating its negative effects.