Abstract

This essay considers how the development and fielding of autonomous weapon systems will implicate compliance with the law of armed conflict (LOAC). It argues that assessing the legality of such weapon systems will be facilitated by analogy to the human soldier. Specifically, the essay considers how inherent human cognitive autonomy is managed through the process of training and responsible command to ensure the soldier functions consistently with the requirements of the LOAC. It then suggests that because fielding commanders will have little opportunity to influence the artificial intelligence-based judgments of future autonomous weapon systems, such systems must undergo a 'compliance validation' process that produces a higher degree of confidence in LOAC compliance than that expected of the human soldier. This leads to consideration of the relationship between the traditional doctrine of command responsibility and the expectation of LOAC compliance for the human soldier, and why this doctrine cannot be expected to produce the same effect with regard to autonomous weapons. Accordingly, the essay proposes that the focal point of 'responsibility' must be reconsidered in relation to autonomous weapons, recasting the doctrine as one focused more on 'procurement responsibility.' The essay closes by raising and answering several theoretical questions related to the development and employment of autonomous weapons.
