- Research Article
- 10.1080/13523260.2026.2639709
- Mar 10, 2026
- Contemporary Security Policy
- Jana Baldus + 1 more
ABSTRACT Discourses on nuclear weapons and military applications of artificial intelligence (AI) portray them either as apocalyptic super weapons, posing catastrophic risks, or as indispensable to states’ national survival and the international security architecture. At the same time, debates about and governance efforts to regulate these weapons have become contested, stalled, or even abandoned. We examine the intersection of both regulatory discourses by asking: How do contemporary apocalyptic discourses about military AI and nuclear weapons shape international security governance of these technologies? Through the concept of apocalyptic imaginaries, we analyze how future-oriented visions capture the simultaneous utopian and dystopian implications of destructive technologies. We identify two cross-cutting apocalyptic imaginaries—exceptionalism and control—that produce specific security governance practices. Our findings reveal how shared apocalyptic imaginaries shape regulatory approaches, increasingly prioritizing risk management and non-proliferation over systemic discussions of disarmament or preventive prohibitions.
- Research Article
- 10.1080/13523260.2026.2625869
- Mar 7, 2026
- Contemporary Security Policy
- Neil Renic
ABSTRACT In this article, I detail the “power and persistence” of civilizational ideas in the Western military-technological space. Technology, I argue, has long functioned as a signifier of so-called “Western civilization”: technological mastery was and continues to be drawn upon as a marker of “civilized peoples” and “civilized warfare”. Technology has also functioned as a safeguard of Western civilization, with techno-military innovations harnessed to contest and dominate others, including those deemed “uncivilized”. Though linked through a shared assumption of civilizational pre-eminence, these dual understandings of technology have historically produced a tension, between restrained and limitless violence. This civilizational imaginary endures today in the context of military AI. Assumptions of civilizational supremacy, I argue, underpin Western claims of responsible technological custodianship, while oxygenating discourses and practices of unrestrained violence.
- Research Article
- 10.1080/13523260.2026.2635959
- Mar 6, 2026
- Contemporary Security Policy
- Berenike Prem
ABSTRACT This article critically examines the control-by-design imaginary in the development of autonomous weapons systems (AWS): the belief that technical safeguards and human ingenuity can resolve the risks they pose. Adopting a technopolitical lens, it conceptualizes AWS design as a heterogeneous process shaped by technical, institutional, and political forces. The analysis highlights two key practices: encoding target profiles and determining and validating error rates. It argues that control-by-design remains partial and fragile, yet is temporarily stabilized through design and testing processes that translate political and military judgments about legitimate targets and acceptable error into classificatory models and performance thresholds, creating provisional assurances of control. However, this imaginary is continually unsettled by the dynamics of military innovation—geopolitical competition, institutional pressures for speed and advantage, and operational demands—that sideline legal and ethical concerns. Moreover, AWS operate in dynamic, adversarial environments that destabilize pre-encoded classifications and risk-based testing, evaluation, verification, and validation.
- Research Article
- 10.1080/13523260.2026.2638846
- Mar 4, 2026
- Contemporary Security Policy
- Rubrick Biegon + 2 more
ABSTRACT References to the “imaginary” have become prevalent in the study of technology and war in contemporary world politics. As an introduction to this special issue, this article interrogates the “turn” to the “imaginary” by tracing how the concept has been deployed across three overlapping research traditions pertaining to social imaginaries, sociotechnical imaginaries, and security imaginaries. The article addresses two key questions at the heart of this research agenda: what are the core analytical properties of the imaginary, and which research methods can be used to study this concept? In contextualizing the core themes examined throughout the special issue, the article seeks to spur debate on the “value-added” of imaginaries in the study of technology, war, and security in (critical) IR and Security Studies scholarship at a time of renewed great power competition.
- Research Article
- 10.1080/13523260.2026.2622863
- Feb 28, 2026
- Contemporary Security Policy
- Thomas Wilkins
ABSTRACT The term “strategic partnership” would not have been identifiable as a codified phenomenon within the International Relations (IR) lexicon until relatively recently. Over the past three decades, the label became widely adopted in the realm of international diplomacy to specifically denominate cases of augmented bilateral relationships between states (and other actors). Tapping into the accumulating corpus of IR literature that has responded to this development, this article seeks to build out the strategic partnership paradigm as a cogent referent for study within the discipline. It does this through a reinterrogation of existing conceptual assumptions, an evaluation of theoretical and analytical research methodologies, and an identification of remaining gaps in our epistemic knowledge of the phenomenon. In the process, it reaffirms the author's case for recognizing strategic partnerships as distinctive modes of security cooperation, with the more robust (“strong”) cases to be classified as security alignments.
- Research Article
- 10.1080/13523260.2026.2624653
- Feb 13, 2026
- Contemporary Security Policy
- Marina E Henke + 2 more
ABSTRACT Russia has repeatedly threatened to use tactical nuclear weapons in its war on Ukraine to intimidate NATO and break its cohesion. Could such a strike constitute a winning strategy? This article addresses this question via four waves of vignette-based survey experiments, two before and two after Russia’s invasion of Ukraine, conducted in Germany and the United States, two critical NATO members with different cultural sensibilities. Contrary to our expectation, we find that public attitudes in both countries are remarkably similar and, in the case of a Russian tactical nuclear strike, favor not a conciliatory but a retaliatory NATO response. Moreover, public attitudes have grown even more hawkish since Russia’s invasion of Ukraine.
- Research Article
- 10.1080/13523260.2025.2599250
- Feb 4, 2026
- Contemporary Security Policy
- Tom F A Watts
ABSTRACT This article draws from the International Relations literature on security imaginaries to examine how the development of AI has been socially constructed by defense planners in the United States as a key technological domain of great power competition. It argues that to fully understand this relationship, we need to recognize how the offset imaginary promoted as part of the Third Offset Strategy has shaped a specific view of these technologies' desired geopolitical purpose. The offset imaginary has framed technological innovation, the development of new warfighting concepts, and organizational adaptation as key to sustaining the DoD's battlefield edge over competitors with larger militaries. By analyzing the institutionalization of this imaginary in the Pentagon's AI adoption efforts in the decade after November 2014, this article calls for greater awareness of how security imaginaries shape American defense planning in today's era of great power competition.
- Research Article
- 10.1080/13523260.2025.2596143
- Jan 22, 2026
- Contemporary Security Policy
- Christopher Lawrence
ABSTRACT Scholars of brinkmanship have long debated whether nuclear crises are dominated by a balance of resolve, or whether a technologically superior competitor may offset that balance with advanced weapons. Recent studies argue that superiority matters, and that the accuracy of modern strategic missiles creates a “new era of counterforce dominance.” Yet counterforce enthusiasts overlook an important technological headwind: the complexity of advanced weapon systems can confound nuclear planners’ ability to predict weapon performance in a real nuclear exchange. This challenge is particularly acute for counterforce systems that cannot be tested in operational settings, and whose failure would bring catastrophic consequences upon their users. Drawing from complexity theory and science and technology studies, I argue that contemporary nuclear competitions are beset by a balance of nuclear humility: states with more technologically demanding nuclear doctrines can be less confident of their technical knowledge, and hence less certain of success in their nuclear missions. More humble competitors that merely threaten retaliation can address their vulnerabilities with relatively low-tech modifications of existing technologies. I illustrate this argument with a Monte Carlo simulation of counterforce strikes by modern US strategic missiles on China’s silo-based missile force. I show that small variations in parameters the attacker cannot precisely know produce wide variation in strike outcomes, and that some of those unknowns emerge from the sophistication of the advanced weapons themselves. The resulting uncertainty in costs to the attacker complicates popular strategic theories of damage limitation.
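The core mechanism the abstract describes, small uncertainties in unknowable parameters producing a wide spread of strike outcomes, can be sketched in a few lines of Monte Carlo code. This is not the authors’ model; it is a minimal illustration using the standard single-shot kill-probability formula, with entirely hypothetical uncertainty ranges for accuracy (CEP), lethal radius, and reliability:

```python
import random

def strike_outcome(n_silos, cep_m, lethal_radius_m, reliability, rng):
    """Count surviving silos after a one-warhead-per-silo strike.

    Uses the standard circular-error-probable kill formula:
    P(hit) = 1 - 0.5 ** ((lethal_radius / CEP) ** 2).
    """
    p_hit = 1 - 0.5 ** ((lethal_radius_m / cep_m) ** 2)
    p_kill = reliability * p_hit
    # A silo survives if its warhead fails to arrive or misses.
    return sum(1 for _ in range(n_silos) if rng.random() >= p_kill)

def monte_carlo(n_trials=10_000, n_silos=300, seed=0):
    """Sample parameters the attacker cannot precisely know and
    record the resulting spread of strike outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        # Hypothetical uncertainty ranges -- illustrative only.
        cep = rng.uniform(90, 150)             # meters
        lethal_radius = rng.uniform(120, 200)  # meters; depends on yield and silo hardness
        reliability = rng.uniform(0.75, 0.95)  # untestable in operational settings
        outcomes.append(strike_outcome(n_silos, cep, lethal_radius, reliability, rng))
    return outcomes

results = monte_carlo()
print(min(results), max(results))  # wide spread despite modest parameter ranges
```

Even with these modest ranges, the number of surviving silos varies by an order of magnitude across trials, which is the kind of outcome uncertainty the article argues complicates damage-limitation theories.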
- Research Article
- 10.1080/13523260.2025.2612518
- Jan 20, 2026
- Contemporary Security Policy
- Justinas Lingevicius
ABSTRACT The article analyses how the European Union (EU) constructs AI-related security in its emerging AI policy. It conceptually engages with the elements of riskification: a referent object and potential harm. Through a discourse analysis of selected documents and semi-structured expert interviews, both fundamental rights and a democratic political system emerge as referent objects, while intrusion, discrimination, and AI autonomy are posited as conditions of potential harm to them. The analysis contributes to the debates on AI-related security by introducing the concept of agentic security. It focuses on protecting human agency, understood as the capacity to sustain control and decision-making power vis-à-vis AI. This security logic is driven by the imaginary of human-machine interaction, aiming to embed AI in the EU’s priorities, and rejects other configurations of human agency and machines. This position, grounded in anthropocentric views, seeks to establish AI governance that supports and protects a liberal order.
- Research Article
- 10.1080/13523260.2025.2609732
- Jan 9, 2026
- Contemporary Security Policy
- William Akoto
ABSTRACT Russia has repeatedly faced allegations of interfering in US elections through cyber and information operations. While the existence of these operations is well-documented, little is known about how Russian state-sponsored cyber proxies strategically counter these accusations on social media. In this article, I analyze 3,424 tweets from 287 Russian proxy accounts identified by X (formerly Twitter), focusing on their responses to election interference claims between 2010 and 2020. Using topic modeling, cluster analysis, and rhetorical analysis, I uncover sophisticated tactics employed by these proxies, including narrative control, strategic retweeting, and reactive engagement, designed to amplify doubt, discredit accusers, and deepen social divisions. My findings highlight the nuanced ways in which state-sponsored proxies leverage social media dynamics to manipulate public perceptions. This study informs contemporary policy debates around misinformation and platform governance, providing valuable insights for safeguarding democratic processes against state-backed disinformation campaigns.