Abstract

Adoption and use of misspecified models can lead to impoverished decision-making, a phenomenon we term model blindness. Two experiments investigated the consequences of model blindness for human decision-making and performance, and how those consequences can be mitigated via an explainable AI (XAI) intervention. Both experiments implemented a simulated route recommender system as a Decision Support System (DSS) with a true data-generating model. In Experiment 1, the true model generating the recommended routes was misspecified at two different levels to impose model blindness on users. In Experiment 2, the same route recommender system was augmented with a mitigation technique to overcome the impact of model misspecifications on decision-making. Overall, the results of both experiments provided little evidence of performance degradation. The participants' decision strategies revealed that they could understand model limitations from feedback and explanations and could adapt their strategies to account for those misspecifications.
