Abstract

The use of autonomous weapons is emerging as one of the most significant threats facing humanity today. Among the major issues raised by their use is that of command responsibility. This paper examines the rules governing the operation of Autonomous Weapon Systems (AWS) on the battlefield, in particular with regard to command responsibility under international humanitarian law. The study also elaborates on the controversy that the development and deployment of these weapons has provoked within the international community. The study is normative-empirical research grounded in legal principles and facts, and it employs a descriptive-analytical method. The study finds that the use of AWS in armed conflict is not explicitly governed by international humanitarian law, and that their use could jeopardize several of its general principles, including proportionality, distinction, military necessity, and limitation. If the use of AWS results in war crimes, the commander can be held liable. However, whether the doctrine of command responsibility can be applied to AWS classified as “Human-out-of-the-Loop Weapons” remains contested, because such weapon systems can select and engage targets without any human input or interaction.
