Abstract

Modern battlefields around the globe have seen the deployment of a new generation of weapons colloquially known as "killer robots", or Lethal Autonomous Weapon Systems (LAWS). Although LAWS currently remain under the supervision of a human operator, advances in Artificial Intelligence allow such weapon systems to achieve a significant degree of autonomy, including autonomy over the decision to use lethal force against human targets. In the absence of global regulation of the research, production, and deployment of LAWS, they are being employed on contemporary battlefields with increasing frequency, from Libya, Syria, Yemen, and Nagorno-Karabakh to Ukraine. The goals of this article are to understand the limitations of the AI that can be employed in LAWS; to present an overview of current LAWS based on available public data; and to assess the state of LAWS regulation through a comparative analysis of the strategies and positions of the EU, the USA, China, Russia, and India. The results of this research demonstrate that, apart from the EU, which is in the process of adopting a regulation that would enforce a total ban on LAWS, the other major powers take a balanced approach to this issue, reserving the right to develop and employ LAWS for the purposes of their national security, in accordance with Article 36 of the 1977 Additional Protocol I to the 1949 Geneva Conventions.
