Abstract

Will AI-enabled capabilities increase inadvertent escalation risk? This article revisits Cold War-era thinking about inadvertent escalation to consider how Artificial Intelligence (AI) technology (especially AI augmentation of advanced conventional weapons) could, through various mechanisms and pathways, affect inadvertent escalation risk between nuclear-armed adversaries during a conventional crisis or conflict. How might AI be incorporated into nuclear and conventional operations in ways that affect escalation risk? The article unpacks the psychological and cognitive features of escalation theorising (the security dilemma, the ‘fog of war’, and military doctrine and strategy) to examine whether and how the characteristics of AI technology, against the backdrop of the broader political-societal dynamics of the digital information ecosystem, might increase inadvertent escalation risk. Are existing notions of inadvertent escalation still relevant in the digital age? The article speaks to the broader scholarship in International Relations – notably ‘bargaining theories of war’ – which argues that technology’s impact on the causes of war occurs through its political effects, rather than through tactical or operational battlefield alterations. In this way, it addresses a gap in the literature on the strategic and theoretical implications of the AI-nuclear dilemma.

Highlights

  • How might Artificial Intelligence (AI)-enabled capabilities increase inadvertent escalation risk? This article revisits Cold War-era thinking about inadvertent escalation to consider how AI technology[1] could, through various mechanisms and pathways, affect inadvertent escalation risk between nuclear-armed adversaries

  • To what extent might AI-enabled capabilities increase inadvertent escalation risk? In a global security environment characterised by great power strategic competition and regional strategic asymmetry, new rungs, firebreaks, and thresholds on the escalation ladder are already challenging conventional assumptions of deterrence, strategic stability, and escalation

  • This article underscores the need for greater clarity and discussion on the specific characteristics of AI technology that may create new rungs on the metaphorical escalation ladder, and in turn, increase the risk of inadvertently transitioning crises between nuclear-armed states from conventional to nuclear confrontation


Introduction

How might AI-enabled capabilities increase inadvertent escalation risk? This article revisits Cold War-era thinking about inadvertent escalation to consider how artificial intelligence (AI) technology[1] (especially AI augmentation of advanced conventional counterforce weapons) could, through various mechanisms and pathways, affect inadvertent escalation risk between nuclear-armed adversaries during a conventional crisis or conflict.

States can be at different rungs or thresholds along the ‘relatively continuous’ pathways to war.[39] Despite its limitations, Kahn’s ‘escalation ladder’ is a useful metaphorical framework for reflecting on the available options (for example, a show of force, reciprocal reprisals, costly signalling, and pre-emptive attacks), the progression of escalation intensity, and possible scenarios in a competitive nuclear-armed dyad. During a crisis or conflict, states require continuous feedback about an adversary’s intentions, where it views itself on the escalation ladder, and how shifts in the scope or intensity of a situation (that is, kinetic, non-kinetic, or rhetorical) may be perceived.[40] Because of the inherently subjective nature of escalation, actions perceived as escalatory by one state can be misunderstood by others.[41] What characteristics of AI technology may create new rungs on the escalation ladder that increase the risk of inadvertently escalating a conventional conflict to nuclear war?

James Johnson
