Abstract

There is a renewed and growing demand for interpretable and explainable machine learning (ML) systems, propelled by the increased use of these systems for making high-stakes decisions affecting individuals. Despite having laid the theoretical groundwork for explainable intelligent systems a few decades ago, information systems scholars have given little attention to recent developments, and especially to the use of ML-trained models in human-in-the-loop decision-making on real-world problems. In this paper, we take a sociotechnical systems lens and employ quantitative and qualitative analysis of a field intervention in a public employment service setting to study ML-informed decision-making with interpreted model outputs. Contrary to theory, our results suggest a small positive effect of explanations on confidence in the final decision, and a negligible effect on decision quality. We uncover complex dynamic interactions between humans and algorithms, and the interplay of algorithmic aversion, trust, experts' heuristics, and changing uncertainty-resolving conditions. We discuss the theoretical and practical implications of our findings.
