Abstract

This study aimed to investigate people's willingness to accept algorithmic over human advice under varying conditions of previous algorithmic performance and decision significance. We randomly presented hypothetical scenarios to 218 participants. Scenarios differed in decision context (i.e., choices relating to taxi routes, movies, restaurants, medical interventions, savings strategies, and bushfire evacuation), and within each scenario, past algorithmic performance was also varied (equal to, above, or far greater than that of the human expert). Participants were asked to rate the significance of each decision and their likelihood of choosing the algorithmic advice over the human expert's. Based on participants' perceived decision significance, scenarios were classified as either low- or high-stakes. We tested for differences in participants' ratings of algorithmic acceptance across levels of past performance and decision significance. Results revealed that as past accuracy and decision significance increased, the likelihood of adopting the algorithmic advice also increased. An interaction between past accuracy and decision significance indicated that, when the algorithm's previous performance was far greater, algorithmic advice was accepted more readily in high-stakes than in low-stakes scenarios. These findings contrast with a large body of past research in which people's algorithm aversion persisted despite superior algorithmic performance, and they have implications for human-algorithm interaction and system design.
