Algorithms of Violence: Critical Social Perspectives on Autonomous Weapons

Peter Asaro

INTRODUCTION

For the past six years, a treaty body within the United Nations called the Convention on Certain Conventional Weapons (CCW) has held a series of informal Meetings of Experts, followed by an ongoing series of formal Group of Governmental Experts (GGE) meetings, on the questions surrounding lethal autonomous weapons systems (LAWS): What are they, and should their development and use by militaries be prohibited or restricted in any way? These discussions have focused primarily on how existing law might, or might not, apply to such systems; where the technology is heading; and how modern militaries develop, evaluate, and deploy systems with high degrees of automation (United Nations Office at Geneva 2019). To a lesser extent, they have also explored the moral and ethical dimensions of such systems, including a presentation I gave in 2014 (Asaro 2014). Civil society has made clear its view that autonomous weapons pose numerous threats that are best addressed by prohibiting their development and use (www.stopkillerrobots.org). There has, however, been only limited discussion of the political implications of these systems, and almost no discussion of the socioeconomic implications.

I have written elsewhere about the ethical and legal implications of autonomous weapons, as well as the risks they pose to global security (Asaro 2012, 687–709). In this paper, I take a more critical long-term view and investigate how the development, widespread adoption, and use of autonomous weapons might transform the politics and economics of our societies. Such a discussion is urgently needed and could further inform the general public about the consequences of failing to prohibit or regulate autonomous weapons.

More broadly, there is rapidly growing public interest in the ways that algorithms shape our lives politically, socially, and economically. Small automated decisions, with a variety of built-in assumptions, and sometimes based on patterns learned from deeply biased datasets, are having an increasingly frequent, and increasingly significant, impact on our lives. But if the cumulative effect of decisions that control access to healthcare, jobs, education, and loans has a disturbing power to shape society at large, surely automating decisions to use violent and lethal force against humans would have similar, if not greater, impacts on human social and political relationships. Yet there has been little consideration of the social and political implications of automating violence, apart from considerations of how such weapons might upset military balances of power and destabilize regional and global politics through arms races. This paper aims to fill that void.

DEFINING AUTONOMOUS WEAPONS AND VIOLENT ALGORITHMS

In the ongoing discussions at the United Nations, there has been some measure of confusion over the precise meaning of lethal autonomous weapons systems, as well as over the various alternative terms put forth. The alternatives attempt to address different aspects of the concept, such as "fully autonomous weapons," or to include less-lethal weapons by dropping the "lethal" portion of the term. Attempting to define "autonomous" has proven a particular sticking point, raising questions about the fundamental nature of agency, self-determination, and causality. Moreover, the term "autonomy" is used in different ways by philosophers, engineers, lawyers, and social scientists. Adding a modifier like "fully" further requires differentiating partial or semiautonomous systems from fully autonomous systems. Given the complex and debatable nature of autonomy, and the urgency of the concerns over autonomous weapons, it is advisable not to wait for a precise definition of autonomy or its degrees.

Indeed, when we look to the kinds of systems that engineers have been calling "autonomous robotics," we find something very different from what philosophers have been calling "autonomous agents." Originally, engineers and roboticists used the term "autonomous robotics" to emphasize the fact that such robots could carry their own computer control systems rather than being tethered to them by large cables (Bekey 2005). With the advance of microelectronics, this form of autonomy is now trivial, though the term is still used to describe robots capable of navigating and manipulating in open or unstructured environments (as opposed to closed and structured environments like factories). Rather, what concerns us about the "autonomy" of weapons systems...