One major concern about the use of autonomous weapon systems (AWS) is that humans cede some, if not all, control over a weapon system to a computer. This relates to the worry that a computer’s ability to operate weapon systems autonomously places control of those systems beyond the bounds of the armed forces. This article examines the role that the concept of meaningful human control plays in the ongoing discourse, describes current perspectives on what meaningful human control entails, and reviews its value in the context of the analysis of AWS presented in this article. Within this article, as in the wider debate, the term meaningful human control describes a quality perceived to be essential if a given attack is to comply with the rules of international humanitarian law. It does not denote a specific class of weapon systems that permit or require a minimum level of human control; rather, it implies that a weapon used in an attack that complies with international humanitarian law would necessarily incorporate a meaningful level of human control.