Abstract

Weaponised artificial intelligence (AI) and the prospective development of lethal autonomous weapon systems (LAWS) have sparked international debate on retaining human control over the use of force. This article unpacks China’s understanding of human–machine interaction and finds that it encompasses many shades of grey. Specifically, despite repeatedly supporting a legal ban on LAWS, China simultaneously promotes a narrow understanding of these systems that is intended to exclude them from what it deems “beneficial” uses of AI. We account for this ambivalent position by investigating how it is constituted through Chinese actors’ competing practices in the areas of economy, science and technology, defence, and diplomacy. These practices produce normative understandings of human control and machine autonomy that pull China’s position on LAWS in different directions. We contribute to scholarship at the intersection of norm research and international practice theories by examining how normativity originates in and emerges from diverse domestic contexts within competing practices. We also aim to provide insights into possible approaches for achieving consensus in debates on regulating LAWS, which at the time of writing have reached a stalemate.
