Abstract

While the applications of artificial intelligence (AI) for militaries are broad and extend beyond the battlefield, battlefield autonomy, in the form of lethal autonomous weapon systems (LAWS), represents one possible use of narrow AI by militaries. Research and development on LAWS by major powers, middle powers, and non-state actors makes exploring the consequences for the security environment a crucial task. This paper draws on classic research in security studies and examples from military history to assess how LAWS could influence two outcome areas: the development and deployment of such systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. It examines these questions through the lens of two characteristics of LAWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices. It also considers how these issues interact with the high degree of uncertainty surrounding potential AI-based military capabilities at present, both in terms of the range of the possible and the opacity of their programming.
