Abstract

Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. However, error has received scant attention in the emerging field of ignorance studies. In this article, we analyze how errors have been mobilized in scientific and public controversies over surveillance technologies. By juxtaposing nineteenth-century debates about the errors of biometric technologies for policing and surveillance with current criticisms of facial recognition systems, we trace a transformation of error and its political life. We argue that the modern preoccupation with error and the intellectual habits inculcated to eliminate or tame it have been transformed with machine learning. Machine learning algorithms do not eliminate or tame error, but they optimize it. Therefore, despite reports by digital rights activists, civil liberties organizations, and academics highlighting algorithmic bias and error, facial recognition systems have continued to be rolled out. Drawing on a landmark legal case around facial recognition in the UK, we show how optimizing error also remakes the conditions for a critique of surveillance. This article is part of a special issue entitled “Histories of Ignorance,” edited by Lukas M. Verburgt and Peter Burke.

Highlights

  • This article is part of a special issue entitled “Histories of Ignorance,” edited by Lukas M. Verburgt and Peter Burke.

  • While in the twentieth century error was mobilized in public debates in relation to weapon technologies in particular, it has recently emerged as a key argument against the use of another computational technology: automated facial recognition.

  • How has machine learning transformed the understanding of error, and how have problematizations of error translated into public controversies about facial recognition? We argue that engineers and scientists work with a machine learning epistemology of error that is often difficult to reconcile with public approaches to error.


Summary

Algorithmic Surveillance and the Political Life of Error

Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. While in the 1950s debates about complex weapons systems were shaped by the “disciplinary repertoire” of physics and electrical engineering, in the 1960s another argument was introduced to the public debate: that missile weapons systems would “lead to an unprecedented reliance on complex, failure-prone computers.”[30] Slayton’s analysis of “arguments that count” has highlighted the increasing disciplinary authority of computer science and the ability of computer scientists and software engineers to speak about the risks of complex systems. This ability was developed through different classifications of error: errors that can be calculated and errors that are due to the unpredictability of human practices and social institutions. In the following two sections, we retrace two different moments in the development of surveillance technologies: the biometric technologies developed by Bertillon and Galton in the nineteenth century and the algorithmic processing of facial images in the twenty-first century.
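
The claim that machine learning optimizes rather than eliminates error can be made concrete with a toy calculation. The sketch below is illustrative only and is not drawn from the article: the score distributions, the 5:1 cost weighting, and all numbers are hypothetical assumptions. It simulates match scores for a face-recognition verifier and shows that moving the decision threshold only redistributes error between false matches and false non-matches, so the system settles on the threshold that minimizes a chosen weighted cost.

```python
import random

# Minimal sketch (hypothetical numbers): a face verifier compares a similarity
# score against a threshold. Genuine pairs (same person) and impostor pairs
# (different people) have overlapping score distributions, so some error is
# unavoidable at any threshold.
random.seed(0)
genuine = [random.gauss(0.7, 0.1) for _ in range(1000)]   # same-person scores
impostor = [random.gauss(0.4, 0.1) for _ in range(1000)]  # different-person scores

def error_rates(threshold):
    """False match rate and false non-match rate at a given threshold."""
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

# No threshold drives both rates to zero; instead the system picks the
# threshold minimizing a weighted cost (here an assumed 5x penalty on
# false matches).
best = min(
    (t / 100 for t in range(101)),
    key=lambda t: 5 * error_rates(t)[0] + error_rates(t)[1],
)
fmr, fnmr = error_rates(best)
print(f"threshold={best:.2f}  false-match rate={fmr:.3f}  false-non-match rate={fnmr:.3f}")
```

Which weighting to adopt lies outside the optimization itself, which is one way to read the article’s point that optimized error remakes the grounds on which surveillance can be criticized.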

