Autonomy in weapon systems is already a genuine concern. States formulate their own definitions of these systems and strive to impose their understanding upon other states. For a fairly large number of states, a total ban on such weapons would be the ideal solution; however, states anxious about the growth of autonomy in war-making capabilities have adopted a second-best approach: containing the risks created by the deployment of such systems. To this end, placing autonomous weapons under meaningful human control has emerged as an important political and legal objective. The author argues that human supervision of autonomous weapons, despite its initial promise, will yield negative results, because humans tend to be too willing to accept the solutions generated by autonomous systems. First observed in civilian industries such as aviation and healthcare, automation bias has the potential to negate most, if not all, of the supervision measures expected to ensure the proper implementation of international humanitarian law.