Killed by Robot or Human: Considerations on Autonomous Weapons Systems and Human Dignity

  • TL;DR
  • Abstract
  • Literature Map
  • Similar Papers
TL;DR

This paper examines debates over autonomous weapon systems (AWS) and human dignity, analyzing arguments for and against prohibition. It suggests that proponents of limited use favor solutions beyond total bans and explores whether meaningful human control can reconcile differing perspectives and address human dignity concerns.

Abstract

The emergence of autonomous weapon systems (AWS) has raised a number of concerns. In addition to issues related to the capability of these weapons to conform to the principles of international humanitarian law, primarily the principles of distinction and proportionality, concerns have also emerged regarding their compliance with human rights law. In this context, respect for human dignity has been cited as one of the major arguments against the use of AWS. The paper examines arguments for and against using the concept of human dignity as a rationale for prohibiting AWS. It demonstrates that those who oppose completely banning AWS do not necessarily believe that AWS conform to human dignity; rather, they offer different reasons why solutions other than total prohibition may be more appropriate. Finally, the paper explores whether implementing “meaningful human control” could bridge the gap between opposing standpoints on AWS and help resolve the human dignity dilemma.

Similar Papers
  • Research Article
  • 10.33327/ajee-18-6s013
Autonomous Weapon Systems: Attributing the Corporate Accountability
  • Jun 15, 2023
  • Access to Justice in Eastern Europe
  • Jinane El Baroudy + 1 more

Background: The use of autonomous weapon systems (AWS) in armed conflict has been rapidly expanding. Consequently, the development of AWS worries legal scholars. If AWS were to operate without ‘meaningful human control’, violations of international law and human rights would be unpreventable. Methods: This paper argues that the most important problem arising from the use of AWS is the attribution of responsibility to corporate actors for violations. Nevertheless, it is ambiguous who is legally responsible for these international crimes, thus creating an accountability gap. The main problem regarding corporate responsibility in the process of employing AWS is determining who exercises causal control over the chain of acts leading to the crime’s commission. The paper proposes a more optimistic view of artificial intelligence, raising two challenges for corporate responsibility. First, the paper maps the framework of the use of AWS with regard to corporate actors. Second, the article identifies the problem of accountability by presenting some possible scenarios linked to the AWS context as a solution to this problem. Results and Conclusions: The results expose ambiguity in international law and the absence of essential rules on the attribution of responsibility for AWS and the punishment of perpetrators; international law needs to be improved and further regulated.

  • Research Article
  • Cited by 23
  • 10.1353/hrq.2016.0034
Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement
  • Jan 1, 2016
  • Human Rights Quarterly
  • Christof Heyns

Much attention has been paid during the last couple of years to the emergence of autonomous weapons systems (AWS), weapon systems that allow computers, as opposed to human beings, to have increased control over decisions to use force. These discussions have largely centered on the use of such systems in armed conflict. However, it is increasingly clear that AWS are also becoming available for use in domestic law enforcement. This article explores the implications of international human rights law for this development. There are even stronger reasons to be concerned about the use of fully autonomous weapons systems--AWS without meaningful human control--in law enforcement than in armed conflict. Police officers--unlike their military counterparts--have a duty to protect the public. Moreover, the judgments involved in the use of force under human rights standards require more personal involvement than those in the conduct of hostilities. Particularly problematic is the potential impact of fully autonomous weapons on the rights to bodily integrity (such as the right to life) and the right to dignity. Where meaningful human control is retained, machine autonomy can enhance human autonomy, but this also means that higher standards of responsibility for the use of force should apply, because there is a higher level of human control. However, fully autonomous weapons entail no meaningful human control and, as a result, such weapons should have no role to play in law enforcement.

  • Research Article
  • 10.52063/25792652-2021.3-121
VIOLATIONS OF THE PRINCIPLES OF INTERNATIONAL LAW AND INTERNATIONAL HUMANITARIAN LAW DURING THE THIRD ARTSAKH WAR
  • Jan 1, 2021
  • Scientific Artsakh
  • Mariam Grigoryan

On September 27, as a result of the military actions unleashed by Azerbaijan, not only the basic principles of international law (jus cogens) underlying the negotiation process for the peaceful settlement of the Nagorno-Karabakh problem, such as the principle of peaceful settlement of disputes and the principle of respect for the right of nations to self-determination, but also the other fundamental norms and principles of international human rights law and international humanitarian law, were grossly violated. The purpose of this article is to investigate the violations of the principles of international law and international humanitarian law during the third Artsakh war. For this purpose, relevant international legal acts, theoretical literature, as well as specific cases of violations of international law and principles of international humanitarian law have been studied. During the research, the author applied both symmetrical (analysis, historical principle) and special (comparative-legal) principles. As a result of the research, we came to the conclusion that in the third Artsakh war, gross violations of international law and the principles of international humanitarian law took place.

  • Research Article
  • 10.24294/jipd.v8i4.3455
The responsibility of corporate actors involved in international crimes through autonomous weapons systems (AWS) before the international criminal court (ICC)
  • Mar 11, 2024
  • Journal of Infrastructure, Policy and Development
  • Jinane El Baroudy

The use of autonomous weapons systems (AWS) has led to several opposing legal opinions regarding their violations of international law. The responsibility of states, individuals, and corporations as producers, designers, and programmers is all being taken into consideration. If the decision to kill humans without “meaningful human control” is transferred to computers, it would be hard to attribute accountability for the actions of AWS to their corporations. Consequently, corporate actors would enjoy impunity in all cases. The present paper argues that the most significant problem arising from the use of AWS is the attribution of responsibility for violations. Corporations are not subject to liability for the legitimate use of weapons under international law. The main problem with corporate responsibility, according to article 25(4) of the Rome Statute, is that the provision relates only to individual criminal responsibility and that the ICC shall have jurisdiction only over natural persons. Nevertheless, corporations may be held accountable under aspects of international law. The paper proposes a more positive view of artificial intelligence, raising corporations’ accountability in international law by drawing on the historical record of judging business leaders. The article identifies aiding and abetting as well as co-perpetration as the two modes of accountability under international law potentially linked to AWS. The study also explores the main ambiguity in international law relating to corporate aiding and abetting of human rights violations by presenting the confusion in determining the standards of these two modes of liability before the ICC and the international ad hoc tribunals.
Moreover, with the new age of war heavily dependent on AI and AWS, one cannot easily and precisely ascertain who must be held accountable for war crimes because of the unanticipated facts in decision-making combined with the aiding or abetting of violations of international law. International law prioritizes the goal of ending impunity for the individual and largely neglects the need to achieve the same goal for corporate complicity. In sum, progress to regulate the use of AWS by corporate actors could be enormously helpful to the cause of ending impunity.

  • Research Article
  • 10.24144/2307-3322.2025.90.5.22
Autonomous weapon systems and artificial intelligence as a challenge to international humanitarian law and human rights
  • Oct 14, 2025
  • Uzhhorod National University Herald. Series: Law
  • O.V Kutovyi + 1 more

The article explores autonomous weapon systems (AWS) operating with artificial intelligence as a complex challenge to contemporary international humanitarian law (IHL) and the international human rights framework. It analyses the technological capabilities and levels of autonomy of combat systems – including land, aerial, and naval unmanned platforms – that are already being used in current armed conflicts. Particular attention is given to the compliance of AWS with the core principles of IHL: distinction, proportionality, humanity, and the prohibition of indiscriminate attacks. The study substantiates the problem of «blurred» responsibility, particularly the difficulty of attributing violations committed by autonomous or semi-autonomous weapon systems to a specific accountable subject. It examines the risks posed by AWS to the observance of Articles 2, 3, 8, and 13 of the European Convention on Human Rights. The potential of the European Court of Human Rights’ case law to adapt to emerging technological realities through structured interpretation and the expansion of precedent is analysed. Special attention is given to international dialogue under the auspices of the United Nations – notably within the framework of the Convention on Certain Conventional Weapons (CCW), which addresses weapons deemed to cause excessive injury or have indiscriminate effects – and to the work of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, as well as the role of soft law, state positions, and international organisations. The article concludes with recommendations regarding the need to preserve meaningful human control, update legal mechanisms of responsibility, and develop a universal regulatory framework for AWS.
It also highlights the importance of an interdisciplinary approach, particularly the integration of ethical, technical, and security considerations in shaping the legal regime governing the use of artificial intelligence systems and tools in military contexts.

  • Research Article
  • Cited by 75
  • 10.1007/s10676-018-9494-0
Autonomous weapons systems, killer robots and human dignity
  • Dec 6, 2018
  • Ethics and Information Technology
  • Amanda Sharkey

One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro in Int Rev Red Cross 94(886):687–709, 2012; Docherty in Shaking the foundations: the human rights implications of killer robots, Human Rights Watch, New York, 2014; Heyns in S Afr J Hum Rights 33(1):46–71, 2017; Ulgen in Human dignity in an age of autonomous weapons: are we in danger of losing an ‘elementary consideration of humanity’? 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher in Autonomous weapons systems: law, ethics, policy, Cambridge University Press, Cambridge, 2016; Pop in Autonomous weapons systems: a threat to human dignity? 2018; Saxton in (Un)dignified killer robots? The problem with the human dignity argument, 2016). This paper critically examines the relationship between human dignity and AWS. Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to international humanitarian law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist arguments about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.

  • Research Article
  • Cited by 16
  • 10.2139/ssrn.2972071
Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapon Systems Under International Humanitarian and Human Rights Law
  • May 24, 2017
  • SSRN Electronic Journal
  • Maya Brehm

The focus of scholarly inquiry into the legality of autonomous weapon systems (AWS) has been on compliance with IHL rules on the conduct of hostilities. Comparably little attention has been given to the impact of AWS on human rights protection. This paper aims to close this gap and to support multilateral policy discussions on AWS. It examines the requirements and constraints that IHRL places on the use of force by means of an AWS, both in relation to the conduct of hostilities and for law enforcement purposes, in times of peace as well as during armed conflicts. The use of a ‘sentry-AWS’ to control a boundary, secure a perimeter or deny access to an area, for example along an international border – a possible application envisaged by proponents of AWS – forms the backdrop to the legal discussion. The paper finds that, although AWS tend to be portrayed as ‘weapons of war’, IHL would never be the sole, and in many instances, it would not be the primary legal frame of reference to assess the legality of their use. Consideration of IHRL requirements and constraints on the use of AWS must therefore be a part of the debate on AWS, including in the framework of the 1980 Convention on Certain Conventional Weapons (CCW). Where IHL permits the ‘categorical’ targeting of security measures, including the use of force, there is scope for the lawful use of an AWS. However, due to procedural requirements and the need to individuate the use of force, this scope is extremely limited under IHRL. IHRL requirements and constraints apply to the use of an AWS in an armed conflict in so far as they are not displaced by IHL. To safeguard human dignity and human rights, human agents must remain involved in algorithmic targeting processes in a manner that enables them to explain the reasoning underlying algorithmic decisions in concrete circumstances. 
This is essential to ensuring the availability of an effective remedy, accountability for the use of force and for maintaining public confidence in states’ adherence to the rule of law, in times of peace as well as war.

  • Research Article
  • Cited by 9
  • 10.2139/ssrn.2754995
Defining the Emerging Notion of Meaningful Human Control in Autonomous Weapon Systems (AWS)
  • Mar 27, 2016
  • SSRN Electronic Journal
  • Thompson Chengeta

The emerging notion of ‘Meaningful Human Control’ (MHC) was suggested by the NGO Article 36 as a possible solution to the challenges posed by Autonomous Weapon Systems (AWS). Various states, NGOs and scholars have welcomed this term. However, the challenge is that MHC is not defined in international law and, at present, there is no literature that extensively or normatively defines it. In this paper, I discuss questions that I consider helpful in defining MHC. The control exercised by humans over the weapons they use has been changing in nature and degree. In the beginning, weapons were mere tools in the hands of fighters who exercised direct control. With the invention of technology, there has been considerable automation of control that was previously exercised by humans. The invention of drones has brought remote control of weapons, making it possible for humans to project force while thousands of miles away from the target. On the horizon are AWS, robotic weapons that, once activated, do not need any further human intervention. In the case of AWS, humans seem to be ‘surrendering’ or delegating control of weapons to computers. As much as this may seem convenient, efficient and safe, it raises far-reaching concerns. For that reason, many scholars and organisations insist that MHC over weapons must be maintained. In order to define MHC, I propose that the international community must ask the following questions: (i) What is the purpose of MHC? (ii) Who should exercise MHC over weapons, and when? Is it manufacturers, programmers, the individuals who deploy them, or all of them? (iii) Over what aspects of AWS should one exercise MHC? In answering these questions, I note that one of the major concerns is that AWS may create a legal responsibility vacuum. For that reason, I suggest that the MHC exercised by humans over AWS should be of such a nature that the weapon user is potentially responsible for all ensuing actions of the robots.
To define the nature of control that allows responsibility, I consider the international law jurisprudence on the notion of ‘control’ as the basis for responsibility. I point out that such control should be exercised over the ‘critical functions’ of AWS, in particular those that relate to decision-making. There are already disagreements in the AWS debate as to what decision-making means. I therefore discuss how that word should be defined as a step towards the definition of MHC. I note that there are various actors involved in the development and deployment of AWS. The fundamental question is whether each actor needs to exercise MHC or whether the term should be defined as a cumulative concept – summing up the different roles played by designers, roboticists, programmers, manufacturers, states and combatants. I argue that if MHC is meant to be a legal standard upon which responsibility for the use of AWS is determined, then one of the common mistakes among debaters is the attempt to define MHC without a specific actor in mind. The suggestion that the definition of MHC should be a standard focussing on a specific actor does not imply that there should be only one standard and that all other actors should be forgotten. Rather, the term MHC should zero in on each actor, producing separate definitions and standards to which the different actors should adhere. Because the control exercised by the aforementioned actors is subject to different standards, the test for the meaningfulness of the control exercised by each of them ought to be different.

  • Research Article
  • 10.24144/2307-3322.2025.91.5.18
Principles of international humanitarian law: content, evolution and modern challenges of application
  • Nov 22, 2025
  • Uzhhorod National University Herald. Series: Law
  • A V Zamryga

The article describes the basic principles of international humanitarian law, which constitute the normative basis for regulating the behavior of parties to armed conflicts. Special attention is paid to the historical evolution of these principles. The article also analyses the relationship between these principles and the development of international human rights law and the practice of international judicial institutions, and their influence on the formation of modern humanitarian standards; this relationship ensures a consistent interpretation of the obligations of states and non-state participants in conflicts. It further describes the current challenges of applying the principles of international humanitarian law in the context of the transformation of the nature of armed conflicts, and analyzes the problems of implementing these principles in practice. It is substantiated that compliance with the principles of international humanitarian law remains a key factor in the legitimacy of the actions of the parties to a conflict and an indicator of the level of humanization of international relations. The emphasis is placed on the fact that the principles of international humanitarian law retain universal and primary importance at the present stage, and that their strict adherence is a necessary condition for ensuring the legitimacy of the actions of the parties to a conflict, preventing excessive suffering of the civilian population and humanizing armed actions.
The article emphasizes the importance of strengthening the mechanisms for implementing these principles in the national legislation of Ukraine, especially in the context of the ongoing armed aggression of the Russian Federation on the territory of Ukraine and the continuing legal regime of martial law, as well as the need to update the mechanisms for implementing these principles at the national level and to strengthen international control over their observance. These issues are highly relevant today.

  • Research Article
  • 10.1093/chinesejil/jmaf005
Autonomous Weapon Systems and Autonomous Cyber Weapons: Convergence in respect of Concepts, Features, Scope, and Implications on International Law
  • Feb 10, 2025
  • Chinese Journal of International Law
  • Jiawei Chu

Autonomous weapon systems, which can identify, select, and attack targets without human intervention, share key characteristics with autonomous cyber weapons. Both pose similar challenges for defining “armed conflict” and the application of fundamental principles of international humanitarian law, such as principles of distinction and proportionality. While discussions on these systems occur on different platforms, autonomous weapon systems currently receive more focused attention. The question of whether future legal frameworks governing autonomous weapon systems also apply to autonomous cyber weapons, or vice versa, cannot be overlooked. This article explores the emerging evidence of convergence between autonomous weapon systems and autonomous cyber weapons, arguing that international society should systematically address the existing ambiguities surrounding these two types of weapons and explore the potential for their integrated regulation under international law.

  • Research Article
  • Cited by 6
  • 10.5539/jpl.v13n2p115
General Legal Limits of the Application of the Lethal Autonomous Weapons Systems within the Purview of International Humanitarian Law
  • May 25, 2020
  • Journal of Politics and Law
  • Roman Dremliuga

This article focuses on the regulation of the application of autonomous weapons systems from the perspective of the norms and principles of international humanitarian law. It discusses what restrictions international humanitarian law imposes on the application of such weapons. The article presents a number of principles that both the weapons and the method of their application must meet: distinction between civilians and combatants, military necessity, proportionality, the prohibition on causing unnecessary suffering, and humanity.

The author concludes that, from the perspective of the principles of international humanitarian law, it is doubtful whether autonomous systems would be able to comply with these principles. Weapons that hit targets without human intervention have been used for a long time, but they have never had the independence they have now. The question of whether autonomous weapons systems comply with international humanitarian law can only be settled once sufficient experience of the application of such weapons in real conditions has been accumulated.

This study demonstrates that it is impossible to say that autonomous weapons systems fail to comply with the principles of humanitarian law in general. The paper provides policy recommendations and assessments for each of the principles under consideration. The author also concludes that it would be necessary not to prohibit autonomous weapons on the grounds that they do not comply with the principles of international humanitarian law, but to develop rules for their application and for human participation in their functioning. A significant challenge to the development of such rules is the opacity of autonomous weapons systems when regarded as complex intelligent computer systems.

  • Book Chapter
  • Cited by 1
  • 10.1163/9789004229495_016
Conclusions International Humanitarian Law and the Challenges of the Changing Technology of War
  • Jan 1, 2013
  • Dan Saxon

This chapter explores how new technologies challenge the relationship between military thinking and international humanitarian law (IHL). It examines the ways in which IHL can and should influence the development and use of new autonomous weapons systems, cyber weapons, communications and data-analysis technology, as well as so-called 'non-lethal' weapons. Recent advances in military technology - in particular the increased use of precision weapons in accordance with the IHL principles of distinction and proportionality - have led to enhanced compliance with the law. The military advantages created for states and organized armed groups in possession of such technologies and capabilities will likely lead, inexorably, to the increasing use of autonomous weapons systems, regardless of their contradictions with IHL. Military professionals and IHL specialists should begin to discuss how and when the law should permit states to use such autonomous platforms during armed conflict. Keywords: autonomous weapons systems; cyber weapons; data-analysis technology; international humanitarian law (IHL); proportionality

  • Research Article
  • Cited by 12
  • 10.2139/ssrn.2755211
Accountability Gap, Autonomous Weapon Systems and Modes of Responsibility in International Law
  • Mar 31, 2016
  • SSRN Electronic Journal
  • Thompson Chengeta

  • Research Article
  • Cited by 3
  • 10.14710/lr.v19i2.58497
Assessing the Legality of Autonomous Weapon Systems: An In-depth Examination of International Humanitarian Law Principles
  • Feb 5, 2024
  • LAW REFORM
  • Ahmad Khalil + 1 more

The use of autonomous weapons systems (AWS) to select targets and attack them without human intervention poses a real legal dilemma. The urgency of the issue is underscored by unofficial reports of AWS entering the battlefield in recent armed conflicts. Previous literature has been inconclusive on the legitimacy of AWS. This prompted the present research, which investigates the issue in more depth to help reach an international consensus within the international humanitarian law (IHL) framework. The article uses a combination of doctrinal and non-doctrinal methodology to provide a more comprehensive understanding of the issue. The methodology focuses on analyzing AWS through the perspective of IHL principles, because IHL is the law most relevant to assessing the legitimacy of AWS. The data collected were secondary and were analyzed using quantitative data analysis to shed light on the contradiction between public sentiment and the actual trajectory of AWS development. The results show that military necessity and humanity are two concepts inherent in the true principles of IHL that do not admit measurement or compromise. The article concludes that although artificial intelligence (AI) has not yet reached a threshold that allows reliable deployment of AWS, the acceleration of its development indicates that AWS will be able to comply with true IHL principles in the near future.

  • Research Article
  • Cited by 4
  • 10.2139/ssrn.2290995
Autonomous Weapons Systems: Taking the Human Out of International Humanitarian Law
  • Jul 10, 2013
  • SSRN Electronic Journal
  • James G Foy

Once confined to science fiction, killer robots will soon be a reality. Both the USA and the UK are currently developing weapons systems that could be capable of autonomously targeting and killing enemy combatants. These capabilities are only 25 years away. According to Additional Protocol I to the Geneva Conventions and customary international law, weapons systems must be capable of operating within the principles of International Humanitarian Law (IHL). This paper demonstrates that without significant restrictions on the use of Autonomous Weapons Systems (AWS), or the creation of a new legal framework, the use of AWS is problematic. First, there are legitimate concerns that AWS are, by their nature, incapable of adhering to IHL principles. Second, there is a more fundamental problem: the principles of IHL are actually insufficient to address the unique concerns regarding AWS. Finally, the solutions proposed by proponents of AWS do not sufficiently address these concerns. A legal solution beyond the general principles of IHL must be developed.
