Abstract
Contemporary discussions and disagreements about the deployment of machine learning, especially in criminal justice contexts, have no foreseeable end. Developers, practitioners and regulators could, however, usefully look back one hundred years to the similar arguments made when polygraph machines were first introduced in the United States. While polygraph devices and machine learning operate in distinctly different ways, at heart they both attempt to predict something about a person based on how others have behaved. Through a historical perspective, this paper examines the development of the polygraph within the justice system – both in courts and during criminal investigations – and draws parallels to today’s discussion. It can be argued that the promotion of lie detectors supported a reforming legal realist approach, something that continues today in debates over the deployment of machine learning where ‘public good’ aims are in play, and that raises questions about how key principles of the rule of law can best be upheld. Finally, this paper proposes a number of regulatory solutions informed by the early lie detector experience.
Highlights
If machine learning and artificial intelligence were people, they would be teenagers, and young teenagers at that
Guided by the early polygraph experience, and giving consideration to the way that admission of scientific findings and expert testimony in court is assessed, this paper proposes the development and application of appropriate ‘scientific validity’ and relevance standards for AI and machine learning
The author has previously called for the development of a framework around the use of automated facial recognition as a trigger for intervention and in an evidential context, bearing in mind the officer’s over-arching decision-making prerogative (Kotsoglou and Oswald 2020). Such standards could augment the work of existing oversight bodies, and complement Hildebrandt’s proposal for preregistration of machine learning research design (Hildebrandt 2018) and Nemitz’s call for a precautionary legal framework around artificial intelligence and the generalisation for AI of regulatory principles found in specific bodies of law (Nemitz 2018), namely here those relating to the admission of expert evidence
Summary
If machine learning and artificial intelligence were people, they would be teenagers, and young teenagers at that. Marston, a charismatic proponent of the use of lie detector testimony in court, put forward evidence based on his test in the Frye case.
Published in: International Review of Law, Computers & Technology