Recently, eXplainable AI (XAI) research has focused on the use of counterfactual explanations to address interpretability, algorithmic recourse, and bias in AI system decision-making. The developers of these algorithms claim that they meet user requirements by generating counterfactual explanations with “plausible”, “actionable” or “causally important” features. However, few of these claims have been tested in controlled psychological studies. Hence, we know very little about which aspects of counterfactual explanations really help users understand the decisions of AI systems. Nor do we know whether counterfactual explanations are an advance on more traditional causal explanations, which have a longer history in AI (e.g., in expert systems). Accordingly, we carried out three user studies to (i) test a fundamental distinction in feature types, between categorical and continuous features, and (ii) compare the relative effectiveness of counterfactual and causal explanations. The studies used a simulated, automated decision-making app that determined safe driving limits after drinking alcohol, based on predicted blood alcohol content; users’ responses were measured objectively (using predictive accuracy) and subjectively (using satisfaction and trust judgments). Study 1 (N = 127) showed that users understand explanations referring to categorical features more readily than those referring to continuous features. It also revealed a dissociation between objective and subjective measures: counterfactual explanations elicited higher accuracy than no-explanation controls but no higher accuracy than causal explanations, yet counterfactual explanations elicited greater satisfaction and trust than causal explanations. In Study 2 (N = 136) we transformed the continuous features of presented items into categorical (i.e., binary) features and found that these converted features led to highly accurate responding. Study 3 (N = 211) explicitly compared matched items involving either mixed features (i.e., a mix of categorical and continuous features) or categorical features only (i.e., categorical and categorically-transformed continuous features), and found that users were more accurate when categorically-transformed features were used instead of continuous ones. It also replicated the dissociation between objective and subjective effects of explanations. The findings delineate important boundary conditions for current and future counterfactual explanation methods in XAI.