Abstract

A few years back, the rapid progress of international efforts to ban lethal autonomous weapon systems (LAWS) left arms controllers amazed: only five years after the founding of the International Committee for Robot Arms Control (ICRAC), the dangers of autonomous weapons were being debated in a UN context, the Convention on Certain Conventional Weapons (CCW), with non-state and state actors alike finding common ground in rejecting weapon systems beyond human control. Since then, however, the debate has made little progress, despite increasing pressure by activists and a strong international campaign. In this article, we will argue that the strategies used by campaigners, based on ethical and legal concerns, must be complemented by classic security-related arguments. Unfortunately, key lessons of the Cold War, including the mutual security benefits of arms control, seem to have been forgotten. Many concepts that are central to arms control—such as stability and verification—are by no means intuitively understood and must apparently be (re-)“learned”. Some of the world’s most important actors, China for example, but also other players, have had little exposure to these concepts. Deconstructing military expectations regarding autonomous weapons and focusing on a preventive arms control approach could help the currently stalled process regain the momentum it needs.

Highlights

  • Rumours had been circulating for some time, but on 7 August 2020 it became official: the American Defense Advanced Research Projects Agency (DARPA), tasked with exploring the latest military technologies, announced a computer simulated dogfight between a US Air Force pilot and an artificial intelligence (AI) (DARPA 2020)

  • The question of whether human dignity is violated by autonomous killing is an important one, as it transcends international humanitarian law (IHL)-based arguments. While it is an empirical matter whether an algorithm is capable of distinguishing between soldiers and civilians, it is argued that the very fact of being killed by a machine decision violates the dignity of both civilians and combatants, grounding a categorical rejection of lethal autonomous weapon systems (LAWS) (Rosert and Sauer 2019, p. 372)

  • Conclusions: What to do? Dealing with a stalled process. What does all this teach us about the current crisis in arms control? In this article, we have argued that LAWS should be regulated or even banned but that the reasons brought forward by most critics—i.e. violations of ethical principles and potential non-compliance with international law—may not be enough

Introduction

Rumours had been circulating for some time, but on 7 August 2020 it became official: the American Defense Advanced Research Projects Agency (DARPA), tasked with exploring the latest military technologies, announced a computer-simulated dogfight between a US Air Force pilot and an artificial intelligence (AI) (DARPA 2020). We argue, first, that promoting preventive arms control requires criticizing and demystifying expectations regarding the military advantages of specific systems in order to bring major players into the debate. This means that hard security or even military arguments brought forward to justify research and development must be taken seriously and addressed in earnest. Focusing on security and arms control comes with a price tag: we must grapple with the fact that some potential applications of a given weapon system may follow a reasonable (albeit military) logic and should not be treated lightly, especially when cheating is easy and verifying compliance is hard. This may lead to the conclusion that a complete ban—while desirable—is currently unfeasible and that compromises must be made. The debate must focus on whether and how classic arms control concepts and aims such as stability and verification can foster a tough and resilient arms control regime in the field of autonomous weapons, making it more attractive and overcoming what many perceive as a crisis, or at least a stand-still.

War at machine speed
From scientific criticism to an international campaign
The main arguments of the critics
Meaningful human control
Underestimated security policy consequences—a lever for arms control?
Arms control of LAWS and other “emerging technologies”
Gaining human control by decelerating decision-making and military action
Regulating the use of LAWS by rules of engagement
Conclusions