Unfuturing peace: augmented reality image design for guerrilla peacebuilding

Abstract: This project explores the potential of image-making in augmented reality (AR) technologies as a means of designing sustaining quality peace futures ("unfuturing peace"), focusing on Ukraine’s heroic defense against Russia’s 2022–2024 full-scale war of aggression as a case study. Employing the methodology of compositional interpretation and the conceptual tool of "futures images," the project theoretically and practically differentiates between defuturing and unfuturing as peace design processes in developing an essay of originally designed, marker-based Augmented Reality Posters in Support of Ukraine as demos of sustaining quality peace arrangements. The posters reference the (physical) integrity of Ukrainian symbols, global food security, and the security of the LGBTQI+ community in Ukraine. The technological artistic process and outcomes of this AR image-making experiment, and their relation to power layouts in peacebuilding, form the basis for theorizing how AR-supported futures design in war-affected communities ("unfuturing peace") could facilitate "guerrilla peacebuilding." In outlining the theoretical and practical premises of guerrilla peacebuilding, the project intersects Augmented Reality Posters in Support of Ukraine with explorations of guerrilla warfare and counterinsurgency efforts leading to the 2016 Havana Peace Agreements in Colombia, as well as mobile technologies/power in guerrilla approaches to democratic development.

Open Access
Autonomous drone swarms and the contested imaginaries of artificial intelligence

AI-based, autonomous weapon systems (AWS) have the destructive potential of weapons of mass destruction and thereby add massively to the intensifying dialectic of fear between ground and space and to the pervasive mass human vulnerability of being tracked and targeted from above. Nevertheless, the dangerous effects of the proliferation of AWS have not been, and still are not, widely acknowledged. On the one hand, the capabilities and effects of AWS are downplayed by the military and the arms industry, which stage these systems as precise and clean. More recently, it has also been argued that they can be built on the basis of 'responsible' or 'trustworthy' artificial intelligence (AI). On the other hand, inadequate sociotechnical imaginaries of AI as a conscious, evil super-intelligence, circulated by Hollywood blockbuster films such as 'Terminator' or 'Ex Machina', dominate the public discourse. Their massive overstatement of the technology's power and their focus on often irrelevant imaginaries such as the 'Terminator' hinder a realistic understanding of AI's capabilities. Against this background, arms control advocates develop new imaginaries to show the loss of 'meaningful human control' (Sharkey 2016) and its problematic consequences. In October 2023, the deployment of autonomous military drones on the battlefield was officially confirmed by a Ukrainian drone company (Hambling 2023).

Open Access
Algorithmic predictions and pre-emptive violence: artificial intelligence and the future of unmanned aerial systems

Abstract: The military rationale of a pre-emptive strike is predicated upon the calculation and anticipation of threat. The underlying principle of anticipation, or prediction, is foundational to the operative logic of AI. The deployment of predictive, algorithmically driven systems in unmanned aerial systems (UAS) would therefore appear to be all but inevitable. However, the fatal interlocking of martial paradigms of pre-emption and models of predictive analysis needs to be questioned insofar as the irreparable decisiveness of a pre-emptive military strike is often at odds with the probabilistic predictions of AI. The pursuit of a human right to protect communities from aerial threats therefore needs to consider the degree to which algorithmic auguries—often erroneous but nevertheless evident in the prophetic mechanisms that power autonomous aerial apparatuses—essentially authorise and further galvanise the long-standing martial strategy of pre-emption. In the context of unmanned aerial systems, this essay will outline how AI actualises and summons forth “threats” through (i) the propositional logic of algorithms (their inclination to yield actionable directives); (ii) the systematic training of neural networks (through habitually biased methods of data-labelling); and (iii) a systemic reliance on models of statistical analysis in the structural design of machine learning (which can and do produce so-called “hallucinations”). Through defining the deterministic intentionality, systematic biases and systemic dysfunction of algorithms, I will identify how individuals and communities—configured upon and erroneously flagged through the machinations of so-called “black box” instruments—are invariably exposed to the uncertainty (or brute certainty) of imminent death based on algorithmic projections of “threat”.

Open Access