Abstract

The evolving field of machine learning and artificial intelligence is frequently presented as a positively disruptive branch of data science whose expansion allows for improvements in the speed, efficiency, and reliability of decision-making, and whose potential extends across diverse areas of human activity. A particular focus for development is the criminal justice sector, and more particularly the field of international criminal justice, where AI is presented as a means to filter evidence from digital media, to perform visual analyses of satellite data, or to conduct textual analyses of judicial reporting datasets. Nonetheless, for all of its myriad potentials, the deployment of forensic machine learning and AI may also generate seemingly insoluble challenges. The critical discourse attendant upon the expansion of automated decision-making, and its social and legal consequences, revolves around two interpenetrating issues: algorithmic bias and algorithmic opacity, the latter phenomenon being the focus of this study. It is posited that the seemingly intractable evidential challenges associated with the introduction of opaque computational machine learning algorithms, though global in nature, are neither novel nor unfamiliar. Indeed, throughout the past decade and across a multitude of jurisdictions, criminal justice systems have been required to respond to the implementation of opaque forensic algorithms, particularly in relation to complex DNA mixture analysis. Therefore, with the objective of highlighting the potential avenues of challenge which may follow from the introduction of forensic AI, this study focusses on the prior experience of litigating, and regulating, probabilistic genotyping algorithms within the forensic science and criminal justice fields. Crucially, the study proposes that machine learning opacity constitutes an enhanced form of algorithmic opacity. Consequently, the challenges to rational fact-finding generated through the use of probabilistic genotyping software may be encountered anew, and exacerbated, through the introduction of forensic AI. In anticipating these challenges, the paper explores the distinct categories of opacity and suggests collaborative solutions which may empower contemporary legal academics, together with legal and forensic practitioners, to set more rigorous and usable standards. The paper concludes by considering the ways in which academics, forensic scientists, and legal practitioners, particularly those working in the field of international criminal justice, might re-conceptualize these opaque technologies, opening a new field of critique and analysis. Using findings from case analyses, overarching regulatory guidance, and data drawn from empirical research interviews, this article addresses the validity, transparency, and interpretability problems, leading to a comprehensive assessment of the current challenges facing the introduction of forensic AI. It builds upon work undertaken at the Nuffield Council on Bioethics Horizon Scanning Workshop: The future of science in crime and security (5th July 2019, London).
