Abstract

Artificial intelligence systems increasingly demonstrate their capacity to make better predictions than human experts. Yet recent studies suggest that professionals sometimes doubt the quality of these systems and overrule machine-based prescriptions. This paper explores the extent to which a decision maker (DM) supervising a machine that makes high-stakes decisions can properly assess whether the machine produces better recommendations. To that end, we study a setup in which a machine performs repeated decision tasks (e.g., whether to perform a biopsy) under the DM's supervision. Because the stakes are high, the DM primarily focuses on making the best choice for the task at hand. Nonetheless, as the DM observes the correctness of the machine's prescriptions across tasks, the DM updates their belief about the machine. The DM is, however, subject to a so-called verification bias: the DM verifies the machine's correctness, and updates their belief accordingly, only if the DM ultimately decides to act on the task. In this setup, we characterize the evolution of the DM's belief and overruling decisions over time. We identify situations in which the DM hesitates forever about whether the machine is better; that is, the DM never fully ignores the machine but regularly overrules it. Moreover, the DM may, with positive probability, wrongly come to believe that the machine is better. We fully characterize the conditions under which these learning failures occur and explore how mistrusting the machine affects them. These findings provide a novel explanation for human–machine complementarity and suggest guidelines on the decision to fully adopt or reject a machine.

This paper was accepted by Elena Katok, special issue on the human–algorithm connection.

Supplemental Material: The online appendix is available at https://doi.org/10.1287/mnsc.2023.4791.
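As a minimal, illustrative sketch of the verification-bias dynamic described above, the simulation below assumes a Bayesian DM, Bernoulli task outcomes, and specific accuracy parameters (ACC_GOOD, ACC_BAD, DM_ACC); these modeling choices and values are assumptions made for illustration and are not taken from the paper. It only shows how feedback restricted to acted-on tasks can distort what the DM learns about the machine.

```python
import random

# Minimal illustrative simulation of learning under verification bias.
# All parameter values and the Bernoulli/Bayesian structure below are
# assumptions made for illustration; they are not taken from the paper.
ACC_GOOD = 0.9    # machine accuracy if the machine is "good"
ACC_BAD = 0.6     # machine accuracy if the machine is "bad"
DM_ACC = 0.75     # DM's own accuracy when overruling the machine
PRIOR_GOOD = 0.5  # DM's prior belief that the machine is good
N_TASKS = 1000


def simulate(machine_is_good: bool, seed: int = 0) -> float:
    """Return the DM's posterior belief that the machine is good after
    N_TASKS tasks, updating only on tasks where the DM decides to act."""
    rng = random.Random(seed)
    belief = PRIOR_GOOD
    machine_acc = ACC_GOOD if machine_is_good else ACC_BAD

    for _ in range(N_TASKS):
        acting_is_correct = rng.random() < 0.5  # true state of the task
        machine_correct = rng.random() < machine_acc
        machine_says_act = acting_is_correct if machine_correct else not acting_is_correct

        # The DM follows the machine only if its expected accuracy,
        # given the current belief, beats the DM's own accuracy.
        expected_machine_acc = belief * ACC_GOOD + (1 - belief) * ACC_BAD
        if expected_machine_acc >= DM_ACC:
            act = machine_says_act
        else:
            dm_correct = rng.random() < DM_ACC
            act = acting_is_correct if dm_correct else not acting_is_correct

        # Verification bias: the machine's correctness is observed, and the
        # belief updated, only when the DM ultimately acts on the task.
        if act:
            like_good = ACC_GOOD if machine_correct else 1 - ACC_GOOD
            like_bad = ACC_BAD if machine_correct else 1 - ACC_BAD
            belief = belief * like_good / (belief * like_good + (1 - belief) * like_bad)

    return belief


if __name__ == "__main__":
    print("posterior that machine is good (truly good):", round(simulate(True), 3))
    print("posterior that machine is good (truly bad): ", round(simulate(False), 3))
```

In this toy version, tasks whose final decision is not to act generate no feedback, so the DM's belief can stall at an intermediate value; this loosely mirrors the perpetual hesitation and mislearning outcomes that the paper characterizes formally.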
