Accountability and Liability for the Deployment of Autonomous Weapon Systems
Chapter 5 analyses the question of accountability, namely whether or not AWS will give rise to an ‘accountability gap’. Its first part explains that, from a legal point of view, what matters is who (human operators) or what (AWS) better guarantees compliance with IHL. The second part examines possible violations of IHL caused by AWS. Three types of situations are distinguished: hardware malfunctions, accidents (violations caused by human fault) and errors (violations caused by the systems’ software). In this regard, special attention is given to the category of ‘dolus eventualis’ for ‘accidents’ as a level of guilt that should be included in the ICC Statute for certain specific situations of individual accountability. In the case of ‘errors’, that is, IHL violations caused by the software system alone, these cannot be attributed to any human operator but only to the deploying state. Lastly, the third part explains why machine-learning algorithms introduce specific evidentiary challenges due to their black-box nature. In this regard, it is argued that such algorithms should provide ‘factual algorithms’, that is, information about the fundamental facts that the algorithm considered in its selection process.
- Research Article
11
- 10.1109/mts.2019.2948439
- Dec 1, 2019
- IEEE Technology and Society Magazine
In the political debate on Autonomous Weapon Systems strong views and opinions are voiced, but empirical research to support these opinions is lacking. Insight into which moral values are related to the deployment of Autonomous Weapon Systems is missing. We describe the empirical results of two studies on moral values regarding Autonomous Weapon Systems that aim to understand people’s perceptions of the introduction of Autonomous Weapon Systems. One study consists of a sample of military personnel of the Dutch Ministry of Defense and the second study contains a sample of civilians. The results indicate that both groups are more anxious about the deployment of Autonomous Weapon Systems than about the deployment of human-operated drones, and that they perceive Autonomous Weapon Systems to have less respect for the dignity of human life. The concern that Autonomous Weapon Systems may create new kinds of psychological and moral harm is very present in the public debate, and in our opinion this is one element that deserves careful consideration in future debates on the ethics of the design and deployment of Autonomous Weapon Systems. The results of these studies reveal a common ground regarding the moral values of human dignity and anxiety pertaining to the introduction of Autonomous Weapon Systems, which could further the ethical debate.
- Research Article
61
- 10.1007/s11023-020-09532-9
- Aug 1, 2020
- Minds and Machines
Accountability and responsibility are key concepts in the academic and societal debate on Autonomous Weapon Systems, but these notions are often used as high-level overarching constructs and are not operationalised to be useful in practice. “Meaningful Human Control” is often mentioned as a requirement for the deployment of Autonomous Weapon Systems, but a common definition of what this notion means in practice, and a clear understanding of its relation with responsibility and accountability is also lacking. In this paper, we present a definition of these concepts and describe the relations between accountability, responsibility, control and oversight in order to show how these notions are distinct but also connected. We focus on accountability as a particular form of responsibility—the obligation to explain one’s action to a forum—and we present three ways in which the introduction of Autonomous Weapon Systems may create “accountability gaps”. We propose a Framework for Comprehensive Human Oversight based on an engineering, socio-technical and governance perspective on control. Our main claim is that combining the control mechanisms at technical, socio-technical and governance levels will lead to comprehensive human oversight over Autonomous Weapon Systems which may ensure solid controllability and accountability for the behaviour of Autonomous Weapon Systems. Finally, we give an overview of the military control instruments that are currently used in the Netherlands and show the applicability of the comprehensive human oversight Framework to Autonomous Weapon Systems. Our analysis reveals two main gaps in the current control mechanisms as applied to Autonomous Weapon Systems. 
We have identified three initial options as future work for the design of a control mechanism, one in the technological layer, one in the socio-technical layer and one in the governance layer, in order to achieve comprehensive human oversight and ensure accountability over Autonomous Weapon Systems.
- Research Article
10
- 10.1017/s0892679416000277
- Jan 1, 2016
- Ethics & International Affairs
Robert Sparrow recently argued in this journal that several initially plausible arguments in favor of the deployment of autonomous weapon systems (AWS) in warfare are in fact flawed, and that the deployment of AWS faces a serious moral objection. Sparrow's argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect for enemy combatants or, worse, they transmit an attitude of disrespect. In this reply we argue that this distinction between AWS and widely accepted weapons is illusory, and therefore cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that if deploying conventional soldiers in a given situation would be permissible, but we could expect to cause fewer civilian casualties by instead deploying AWS, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.
- Book Chapter
3
- 10.1017/cbo9781316597873.012
- Dec 31, 1920
Uncertainty and its problems. The debate concerning the law, ethics and policy of autonomous weapons systems (AWS) remains at an early stage, but one of the consistent emergent themes is that of uncertainty. Uncertainty presents itself as a problem in several different registers: first, there is the conceptual uncertainty surrounding how to define and debate the nature of autonomy in AWS. Contributions to this volume from roboticists, sociologists of science and philosophers of science demonstrate that within and without the field of computer science, no stable consensus exists concerning the meaning of autonomy or of autonomy in weapons systems. Indeed, a review of definitions invoked during a recent expert meeting convened by states parties to the Convention on Certain Conventional Weapons shows substantially different definitions in use among military experts, computer scientists and international humanitarian lawyers. At stake in the debate over definitions are regulatory preoccupations and negotiating postures over a potential pre-emptive ban. A weapons system capable of identifying, tracking and firing on a target without human intervention, and in a manner consistent with the humanitarian law obligations of precaution, proportionality and distinction, is a fantastic ideal type. Defining AWS in such a way truncates the regulatory issues to a simple question of whether such a system is somehow inconsistent with human dignity – a question about which states, ethicists and lawyers can be expected to reasonably disagree. However, the definition formulated in this manner also begs important legal questions in respect of the design, development and deployment of AWS.
Defining autonomous weapons in terms of this pure type reduces almost all questions of legality to questions of technological capacity, to which a humanitarian lawyer's response can only be: ‘If what the programmers and engineers claim is true, then … ’ The temporally prior question of whether international law generally, and international humanitarian law (IHL) in particular, prescribes any standards or processes that should be applied to the design, testing, verification and authorization of the use of AWS is not addressed. Yet these ex ante considerations are urgently in need of legal analysis and may prove to generate deeper legal problems.
- Research Article
7
- 10.1163/18781527-01001010
- Jun 9, 2019
- Journal of International Humanitarian Legal Studies
The legal debate surrounding the development and deployment of autonomous weapons systems (aws) has stagnated in recent years, having arguably hit the hard limits of legal doctrine. At the heart of this impasse lies the focus upon autonomy as both the innovative and defining feature of aws. Thus, the autonomy of the weapons system places it in a legally liminal zone between agent and object, revealing a set of legal problems that revolve around issues of control, influence, responsibility and liability, and questions of legal compliance that follow from the prospect of autonomous lethal decision-making. This paper seeks to explore alternative framings to the same underlying technology as a means of escaping the limits imposed by the autonomy framework that has dominated the debate to date, and to examine the consequences that flow from pursuing these approaches from legal and regulatory perspectives. In particular, emphasis is placed upon the networks approach, and the systems approach, which this paper sets out and differentiates from the orthodox emphasis upon autonomy. These alternative approaches suggest that the legal problems arising from the autonomy framing are the easiest set of issues to address, insofar as these frame legal problems, while the networks and systems approaches seem to touch upon legal mysteries to which no ready legal or regulatory responses can be made. Rather than dismiss the network and systems approaches, however, this paper suggests that appropriate, adequate and robust legal and regulatory responses must consider the insights and challenges that these approaches pose, and that pursuing these approaches will lead to powerful converging arguments supporting a moratorium on the deployment of aws.
- Research Article
- 10.36128/priw.vi50.774
- Aug 22, 2024
- LAW & SOCIAL BONDS
This paper addresses the problem of determining the legality of autonomous weapon systems (AWS) under international humanitarian law (IHL), focusing on two distinct bodies of law: targeting law and weapons law. This study emphasizes that these two bodies of law must not be conflated in the analysis. The paper aims to clarify whether AWS could be considered illegal under IHL, taking into account the principles of distinction, proportionality, and precaution. The research methodology includes an analysis of the relevant provisions of IHL and customary humanitarian law. The research design includes an examination of the potential of AWS to cause unnecessary injury or suffering and their classification as indiscriminate weapons. The paper concludes that while AWS possess autonomous decision-making capabilities, human oversight is required to prevent excessive harm.
- Book Chapter
- 10.3233/nhsdp220008
- Sep 21, 2022
The re-emergence of geostrategic competition is focused, among other things, on the development of autonomous technology and the deployment of autonomous weapons systems. While technological development has always had a profound impact on international law, the structural limitations of the international regulatory regime limit international law’s ability to provide any guidance on the development of these emerging technologies. Striving to maintain a competitive advantage, some states have started serious programs for developing autonomous technologies. Moreover, they have launched strategic documents broadcasting their ambitions to deploy autonomous weapons systems. These and other developments have instigated a vigorous legal debate, ranging from calls for a complete ban of these systems to full support.
- Book Chapter
1
- 10.1007/978-3-030-43890-6_30
- Jan 1, 2020
The text deals with the psychological and ethical aspects of using autonomous weapons. It focuses on controversies associated with the contemporary use of robotic weapons, or rather unmanned weapon systems, and the possible use of autonomous weapons in future armed conflicts waged by state and non-state actors. These means can achieve significant success at the tactical level while minimizing, or even entirely avoiding, their users’ own human losses at the point of projection of military force. However, their use may, on the other hand, directly contradict the long-term strategic objectives of their user and partially delegitimize its intentions. War, as a complex phenomenon, is not limited to direct combat activity, and in relation to a number of non-military factors, the use of autonomous weapons can be problematic from both ethical and psychological points of view. Thus, the military and technological superiority of one party may be partially offset in some conflicts by the ideological superiority of the weaker adversary. The text seeks to characterize the main controversies that the deployment of autonomous weapon systems can represent in this respect.
- Research Article
- 10.1080/13642987.2025.2594775
- Dec 9, 2025
- The International Journal of Human Rights
The rise of autonomous weapon systems is a fact. These systems have the potential to transform armed conflicts. They dehumanise warfare and remove the human from every loop in the decision-making cycle of any given battle. This article addresses some of the probable outcomes of this drastic transformation. One of them will be a reduction in the efficacy of international criminal law, which amounts to nothing less than the elimination of its deterrent effect. This elimination will be highlighted through a short legal scrutiny using the elements of a specific crime as regulated in the Rome Statute. The deployment of autonomous weapon systems will distort the future functioning of the international criminal system owing to the problems it will create in relation to attribution, conduct and mens rea.
- Book Chapter
2
- 10.3366/edinburgh/9781474483575.003.0010
- Jan 19, 2021
This chapter explores ethical challenges potentially arising from AI-controlled drones, focusing on how their use might be restrained through international legal regulation. The starting point is the 2013 recommendation of a moratorium on the production of lethal autonomous weapon systems (LAWS) to the United Nations (UN) Human Rights Council by its Special Rapporteur on extrajudicial, summary or arbitrary executions. The response by UN member states to this recommendation was to resolve that relevant discussions should occur within the framework of the UN Convention on Certain Conventional Weapons (CCW). However, the critical problem identified in this chapter is that the introduction of CCW-based regulation requires consensus among all the treaty’s members. Thus, to achieve principled and legally-binding restraints on the use of autonomous armed drones, scholars and policy practitioners need to confront a set of challenges to multilateral consensus. These challenges include: threats to multilateralism in arms-control generally; ongoing concerns about a military AI arms race; anti-activist sentiments and ‘banphobia’ among arms-control diplomats; and differing international understandings of what moral values are applicable to the deployment of autonomous weapons systems.
- Research Article
1
- 10.1163/18757413_02401009
- Dec 17, 2021
- Max Planck Yearbook of United Nations Law Online
The deployment of autonomous weapon systems (‘aws’) in military operations is a major concern for the international community. While the military in particular argues that increased autonomy mitigates the risks of death and injury for civilians, scientists and civil society emphasise that humans still have to play a decisive role when increasingly outsourcing core competences to machines. In 2017, a Group of Governmental Experts (‘gge’) met for the first time to discuss legal and ethical issues of aws. The centrepiece of the debate was (and still is) the role humans play in an increasingly automated battlefield. In the past couple of years, consensus has been emerging on the necessity of maintaining human control. It will be argued in this contribution that human control (in abstracto) is not merely a political demand. It is also a legal obligation derived from international humanitarian law (‘ihl’). This contribution seeks to analyse relevant rules of ihl with a specific focus on the legal nature of the Martens Clause arguing that human control (in abstracto) is not only politically desired but that it is legally required. That being said, the various concepts aiming to implement and operationalise human control (in concreto) will be examined with a view to offering solutions for a potential regulatory framework on aws at the United Nations (‘UN’) in Geneva.
- Book Chapter
13
- 10.1017/cbo9781316597873.007
- Dec 31, 1920
Critics of autonomous weapons systems (AWS) claim that they are both inherently unethical and unlawful under current international humanitarian law (IHL). They are unethical, it is said, because they necessarily preclude making any agent fairly accountable for the wrongful effects of AWS, and because allowing machines to make life or death decisions seriously undermines human dignity: only moral beings should make such decisions and only after careful moral deliberation, for which they could be held accountable. AWS are inherently unlawful, critics say, because they cannot possibly comply with the core IHL principles of discrimination and proportionality. Contrary to these critics, I argue in this chapter that AWS can conceivably be developed and deployed in ways that are compatible with IHL and do not preclude the fair attribution of responsibility, even criminal liability, in human agents. While IHL may significantly limit the ways in which AWS can be permissibly used, IHL is flexible and conventional enough to allow for the development and deployment of AWS in some suitably accountable form. Having indicated how AWS may be compatible with IHL and fair accountability, I turn to a serious worry that has been largely neglected in the normative literature on AWS. The development of AWS would deepen the already ongoing and very troubling dynamics of asymmetrical and so-called riskless warfare. While IHL-compatible AWS could be developed, in principle, and agents in charge of designing, testing and deploying AWS could be held accountable for wrongful harms, there are troublingly few incentives to duly control and minimize the risks to foreign civilians in the contexts of asymmetrical warfare. The most troubling aspects of AWS, I suggest, are not matters of deep ethical or legal principle but, rather, the lack of incentives for implementing effective regulations and accountability. 
The main goal of this chapter is to articulate this distinct worry and emphasize how serious it is. Once this is appreciated, it will be clear that more attention needs to be paid to determining what conditions would allow for the effective oversight of AWS development, testing and eventual use. Such oversight may be accomplished partly by defining liability criteria for agents working within the industrial and organizational complex behind AWS design, production and use. Ultimately, however, public scrutiny may be the only available effective push for IHL compliance and accountability.
- Research Article
9
- 10.2139/ssrn.2754995
- Mar 27, 2016
- SSRN Electronic Journal
The emerging notion of ‘Meaningful Human Control’ (MHC) was suggested by the NGO Article 36 as a possible solution to the challenges posed by Autonomous Weapon Systems (AWS). Various states, NGOs and scholars have welcomed this term. However, the challenge is that MHC is not defined in international law and, at present, there is no literature that extensively or normatively defines it. In this paper, I seek to discuss questions that I consider helpful in defining MHC. Control exercised by humans over the weapons they use has been changing in nature and degree. In the beginning, weapons were mere tools in the hands of fighters who exercised direct control. With the invention of technology, there has been considerable automation of control that was previously exercised by humans. The invention of drones has enabled remote control of weapons, making it possible for humans to project force while thousands of miles away from the target. On the horizon are AWS, robotic weapons that, once activated, do not need any further human intervention. In the case of AWS, humans seem to be ‘surrendering’ or delegating control of weapons to computers. Inasmuch as this may seem convenient, efficient and safe, it raises far-reaching concerns. For that reason, many scholars and organisations insist that MHC over weapons must be maintained. In order to define MHC, I propose that the international community must ask the following questions: (i) What is the purpose of MHC? (ii) Who should exercise MHC over weapons, and when: manufacturers, programmers, the individuals who deploy them, or all of them? (iii) Over what aspects of AWS should one exercise MHC? In answering the above questions, I note that one of the major concerns is that AWS may create a legal responsibility vacuum. For that reason, I suggest that the MHC exercised by humans over AWS should be of such a nature that the weapon user is potentially responsible for all ensuing actions of the robots.
To define the nature of control that allows responsibility, I consider the international law jurisprudence on the notion of ‘control’ as the basis for responsibility. I point out that such control should be exercised over the ‘critical functions’ of AWS, in particular those that relate to decision-making. There are already disagreements in the AWS debate as to what decision-making means. I therefore discuss how that word should be defined as a step towards the definition of MHC. I note that there are various actors involved in the development and deployment of AWS. The fundamental question is whether each actor needs to exercise MHC or whether the term should be defined as a cumulative concept, summing up the different roles played by designers, roboticists, programmers, manufacturers, states and combatants. I argue that if MHC is meant to be a legal standard upon which responsibility for the use of AWS is determined, then one of the common mistakes among debaters is the attempt to define MHC without a specific actor in mind. The suggestion that the definition of MHC should be a standard focusing on a specific actor is not to imply that there should be only one standard and all other actors should be forgotten. Rather, the term MHC should zero in on each actor, producing separate definitions and standards to which the different actors should adhere. Because the control exercised by the aforementioned actors is subject to different standards, the test for the meaningfulness of the control exercised by each of them ought to be different.
- Research Article
- 10.12775/clr.2025.010
- Dec 9, 2025
- Comparative Law Review
This article advocates the creation of a new criminal offence, Negligent Deployment of Autonomous Weapon Systems, to address the legal and ethical challenges presented by the use of fully AI weapons. This proposed offence seeks to establish clear accountability for individuals and entities responsible for the design, deployment, and operational control of fully AI weapons where their negligence results in unlawful harm or poses a substantial risk of such harm. The existing body of work has primarily focused on the issue of command responsibility and mens rea in relation to military personnel who utilize such systems. The literature has emphasized the need to further explore the question of liability for negligence on the part of manufacturers and developers. This paper seeks to contribute to addressing this legal gap by proposing the introduction of a new criminal offence. Given the importance of this topic for the international community and its potentially far-reaching consequences, the author advocates the harmonization of national criminal laws on this matter and the universal adoption of this or a comparable criminal offence.
- Research Article
- 10.59188/jurnalsosains.v5i6.32268
- Jun 14, 2025
- Jurnal sosial dan sains
The transformation of global conflict architecture over the past decade reflects a significant shift toward non-conventional or asymmetric warfare, characterized by the involvement of non-state actors, guerrilla tactics, cyber infiltration, and the deployment of autonomous weapon systems. This type of warfare has blurred the distinction between combatants and civilians, thereby complicating the application of core principles of International Humanitarian Law (IHL), such as distinction, proportionality, and precaution. This study aims to evaluate the effectiveness and adaptive capacity of IHL in addressing the challenges posed by contemporary asymmetric warfare. The research employs a normative juridical approach using an evaluative-reflective model based on normative gap analysis, complemented by case studies of conflicts in Syria and Yemen. The findings reveal a structural disparity between universal legal norms and operational practices in the field, where violations of IHL principles frequently occur in the absence of effective accountability mechanisms. This study recommends reforming international legal instruments through the integration of adaptive legal principles, the inclusion of non-state actors, and the development of technology-based legal monitoring systems. Such an approach is essential to reinforce IHL systems to be more contextual, dynamic, and responsive to evolving forms of armed violence in the contemporary era.