Abstract

Algorithms are increasingly used across domains of public policy. They help caseworkers profile the unemployed, support administrations in detecting tax fraud, and produce recidivism risk scores that judges and criminal justice managers take into account when making bail decisions. In recent years, critics have pointed to the ethical challenges of these tools, emphasizing problems of discrimination, opacity, and accountability, and computer scientists have proposed technical solutions to these issues. In contrast to these important debates, the literature on how these tools are implemented in actual everyday decision-making has remained cursory. This is problematic because the consequences of ADM systems depend at least as much on their implementation in a concrete decision-making context as on their technical features. In this study, we show how the introduction of risk assessment tools at the local level of the US criminal justice sector has deeply transformed the decision-making process. We argue that this is mainly because the evidence generated by the algorithm introduces a notion of statistical prediction into a situation that was previously dominated by fundamental uncertainty about the outcome. While the case study evidence supports this expectation, the possibility of shifting blame to the algorithm appears much less important to the criminal justice actors.

Highlights

  • The increased use of algorithmic decision-making (ADM) systems in many domains of public life has spurred a debate about the opportunities and risks involved

  • Optimists emphasize that algorithms can recognize patterns in enormous amounts of data far more rapidly than humans ever could. They hold that artificial intelligence enhances evidence-based decision-making, not least because ADM systems do not suffer from well-known psychological biases

  • Our findings indicate that the main appeal of using the ADM system comes from two sources


Introduction

The increased use of algorithmic decision-making (ADM) systems in many domains of public life has spurred a debate about the opportunities and risks involved. For decades, judges and other actors in the criminal justice (CJ) system have followed strict rules and standard procedures in such cases and relied mainly on expert knowledge (chiefly reports from psychologists, social workers, and others) to inform their decisions. With algorithmic tools, this situation of fundamental uncertainty changes into one of statistical risk. Scholars of public administration and political scientists have found that actors are reluctant to take decisions that may have harmful consequences. In such situations, blame-avoidance strategies are used to delegate, blur, or shift responsibility for the decision in case it turns out to have negative consequences (Hinterleitner 2017; Weaver 1986; Vis and Van Kersbergen 2007; König and Wenzelburger 2014; Hood 2011). We investigate whether and to what extent blame avoidance features in the actors' descriptions of the changed decision-making context.
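To make the shift from fundamental uncertainty to statistical risk concrete, the Python sketch below shows how a pretrial risk tool might condense a defendant's file into a single probability. Everything here is hypothetical: the features, weights, and intercept are invented for illustration and do not describe the paper's case or any real instrument.

import math

# Hypothetical illustration only: the features, weights, and intercept are
# invented and do not come from the paper or from any real risk tool.
WEIGHTS = {"prior_arrests": 0.35, "age": -0.04, "failed_to_appear": 0.80}
INTERCEPT = -1.2

def risk_score(prior_arrests: int, age: int, failed_to_appear: bool) -> float:
    """Return a hypothetical recidivism risk between 0 and 1 via a logistic model."""
    z = (INTERCEPT
         + WEIGHTS["prior_arrests"] * prior_arrests
         + WEIGHTS["age"] * age
         + WEIGHTS["failed_to_appear"] * int(failed_to_appear))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps z to a probability

# Where a judge once weighed an open-ended expert report, the tool now
# reports a single number: about 0.42 for this invented profile.
print(f"risk: {risk_score(prior_arrests=3, age=24, failed_to_appear=True):.2f}")

The point is not the arithmetic but the change in the decision situation: the same case now arrives with an explicit, seemingly objective probability attached.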
