Abstract

Public and private organizations are increasingly implementing various algorithmic decision-making systems. Through legal and practical incentives, humans will often need to be kept in the loop of such decision-making to maintain human agency and accountability, provide legal safeguards, or perform quality control. Introducing such human oversight results in various forms of semi-automated, or hybrid decision-making – where algorithmic and human agents interact. Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts. We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection, constitutional and administrative legal principles, as well as the need for contextual analysis of such principles. Finally, we outline a research agenda to capture contextual legal dependencies of hybrid decision-making, pointing to the need to go beyond legal doctrinal studies by adopting socio-technical perspectives and empirical studies.

Highlights

  • Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts

  • We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection, constitutional and administrative legal principles, as well as the need for contextual analysis of such principles

  • The ambitions of integrating artificial intelligence (AI) in diverse public and private sectors are becoming increasingly apparent, with the European Commission spearheading a commitment to furthering its use.[1]

Summary

Background

The impetus to implement hybrid decision-making may vary. In some cases, it may be driven by ambitions of increased efficiency, where reducing human discretion is a specific goal that cannot fully be realized due to technical or legal constraints.[6] In other areas, such as online moderation, the need for human contextual analysis is well known, but the sheer scope of the task facing moderators, together with external pressures, calls for further automation.[7] In many cases, keeping a human in the loop is a deliberate attempt to maintain human agency and accountability, and to provide legal safeguards and quality control. We approach the ambitions of implementing algorithmic decision-making in three legal contexts: policing, social welfare systems, and online moderation. These environments are chosen because they are currently subject to intense automation efforts, driven by both external pressure and internal ambitions and necessities. This analysis points to the need for research into hybrid decision-making environments to go beyond legal doctrinal studies, by adopting a socio-technical perspective and the use of empirical studies.

A brief note on terminology
Bureaucrats everywhere!
From public to private and beyond
The General Data Protection Regulation – a look at the trees
Beyond data protection – a brief look at the forest
Introduction
The moderator in the loop – staying nuanced in a tsunami of content
Looking back
Looking forward – the human in the machine