Abstract

We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study the role played by the desire for human discretion in moral decision-making. This focus seems justified in light of evidence suggesting that people may not doubt the quality of algorithmic decisions yet still reject them. In our first study, we found that people prefer humans with decision-making discretion over algorithms that rigidly apply exogenously given, human-created fairness principles to specific cases. In the second study, we found that people do not prefer humans to algorithms because they appreciate flesh-and-blood decision-makers per se, but because they appreciate humans’ freedom to transcend fairness principles at will. Our results contribute to a deeper understanding of algorithm aversion. They indicate that emphasizing the transparency of algorithms that demonstrably follow fairness principles may not be sufficient to foster societal acceptance of algorithms, and they suggest reconsidering certain features of the decision-making process itself.

Highlights

  • The use of decision-making algorithms promises societal benefits in a wide variety of applications

  • The workers clearly preferred a human decision-maker with discretion. Did this occur because people value the involvement of flesh-and-blood beings in ethical decision-making, or because they value moral autonomy in view of individual cases? Put differently, is it the human nature of the decision-making entity itself or the human capacity to transcend rules and apply one’s own ethical standards that causes participants to favor humans over algorithms? We address this question in Study 2 to achieve a deeper understanding of the observed algorithm aversion

  • Workers’ regime choices were not driven by their desire to have a human being apply the fairness principle to an individual case


Introduction

The use of decision-making algorithms promises societal benefits in a wide variety of applications. For many such applications, decisions have moral implications. Fairness considerations based on the principle of equal opportunity require, for example, that creditworthy individuals have the same chance of receiving a loan, and criminal individuals the same chance of being arrested, regardless of group membership (Elzayn et al., 2019). It is especially in these morally sensitive domains that the use of algorithms faces societal resistance. Necessary responses could be located at the level of governance, where certain laws (e.g., liability law) would have to be adjusted, or at the educational level, where certain fears would have to be addressed through a demystification of algorithms and their actual functioning.
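To make the equal-opportunity criterion concrete: it requires that the rate of favorable decisions among truly deserving cases (the true-positive rate) be equal across groups. The following minimal Python sketch is our illustration, not code or data from the paper; the function names, example data, and tolerance are assumptions.

```python
# Illustrative sketch of the equal-opportunity fairness criterion:
# among truly positive cases (e.g., creditworthy applicants), each group
# should receive favorable decisions (e.g., loan approvals) at the same rate.

def true_positive_rate(decisions, labels):
    """Share of truly positive cases (labels == 1) that received a favorable decision."""
    decisions_on_positives = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(decisions_on_positives) / len(decisions_on_positives)

def satisfies_equal_opportunity(dec_a, lab_a, dec_b, lab_b, tol=0.05):
    """True if the two groups' true-positive rates differ by at most `tol` (tolerance is a hypothetical choice)."""
    return abs(true_positive_rate(dec_a, lab_a) - true_positive_rate(dec_b, lab_b)) <= tol

# Hypothetical data: group A approves 3 of 4 creditworthy applicants, group B only 2 of 4.
print(satisfies_equal_opportunity([1, 1, 1, 0], [1, 1, 1, 1],
                                  [1, 1, 0, 0], [1, 1, 1, 1]))  # False (0.75 vs. 0.50)
```

An algorithm of the kind discussed in the paper would enforce such a principle rigidly across all cases, whereas a human decision-maker with discretion could deviate from it in individual cases.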
