Abstract

AI-based autonomous weapon systems (AWS) have the potential to become weapons of mass destruction, massively adding to the intensifying dialectic of fear between ground and space and to the pervasive mass human vulnerability of being tracked and targeted from above. Nevertheless, the dangerous effects of the proliferation of AWS have not been, and still are not, widely acknowledged. On the one hand, the capabilities and effects of AWS are downplayed by the military and the arms industry, which stage these systems as precise and clean. Recently, it has also been argued that they can be built on the basis of a ‘responsible’ or ‘trustworthy’ artificial intelligence (AI). On the other hand, inadequate sociotechnical imaginaries of AI as a conscious, evil super-intelligence, circulated by Hollywood blockbuster films such as 'Terminator' or 'Ex Machina', dominate the public discourse. Their massive overstatement of the power of the technology, and their focus on often irrelevant imaginaries such as the ‘Terminator’, hinder a realistic understanding of AI’s capabilities. Against this background, arms control advocates develop new imaginaries to show the loss of ‘meaningful human control’ (Sharkey 2016) and its problematic consequences. In October 2023, the deployment of autonomous military systems on the battlefield was officially confirmed by a Ukrainian drone company (Hambling 2023).
