Abstract
Public and private organizations are increasingly implementing various algorithmic decision-making systems. Through legal and practical incentives, humans will often need to be kept in the loop of such decision-making to maintain human agency and accountability, provide legal safeguards, or perform quality control. Introducing such human oversight results in various forms of semi-automated, or hybrid, decision-making, where algorithmic and human agents interact. Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts. We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection and constitutional and administrative legal principles, as well as the need for contextual analysis of such principles. Finally, we outline a research agenda to capture contextual legal dependencies of hybrid decision-making, pointing to the need to go beyond legal doctrinal studies by adopting socio-technical perspectives and empirical studies.
Highlights
Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts.
We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection and constitutional and administrative legal principles, as well as the need for contextual analysis of such principles.
The ambitions of integrating artificial intelligence (AI) across diverse public and private sectors are becoming increasingly apparent, with the European Commission spearheading a commitment to furthering its use in both.[1]
Summary
The impetus to implement hybrid decision-making may vary. In some cases, it is driven by ambitions of increased efficiency, where reducing human discretion is a specific goal that cannot be fully realized due to technical or legal constraints.[6] In other areas, such as online moderation, the need for human contextual analysis is well known, but the sheer scope of the task facing moderators, combined with external pressures, calls for further automation.[7] In many cases, keeping a human in the loop is a deliberate attempt to maintain human agency and accountability, and to provide legal safeguards and quality control. We approach the ambitions of implementing algorithmic decision-making in three legal contexts: policing, social welfare systems, and online moderation. These environments are chosen because they are currently subject to intense automation efforts driven by both external pressure and internal ambitions and necessities. Our analysis points to the need for research into hybrid decision-making environments to go beyond legal doctrinal studies, through the adoption of a socio-technical perspective and the use of empirical studies.