Introduction

The question of ‘killer robots’, or autonomous weapons systems (AWS), has garnered much attention in recent discourse. While officials often downplay the prospect of such systems making targeting and other crucial decisions, their own statements reveal the possibility that such capabilities will be developed in the near future. For instance, in late 2012 the US Department of Defense (DoD) imposed a de facto moratorium on the development of fully autonomous weapons systems by emphasizing that weapons ‘shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force’. The mere fact that the DoD felt the need to constrain the development and use of AWS, for the time being and in several ways, indicates that the prospect of such weapons is realistic. Indeed, DoD Directive 3000.09 includes a bypass clause under which deviations from its requirements can be approved by high-ranking officials through special procedures. There is therefore a consensus among commentators that the motivation to develop and deploy AWS will eventually overcome these temporary constraints. This makes the discussion of AWS a timely one.

When discussing the legality and legitimacy of such weapons, many share the intuition that machines should not be making ‘decisions’ to use lethal force during armed conflict. However, the current discussion of just why this is so is rather unsatisfying. The ongoing discourse on AWS comprises nuanced approaches situated between two extremes, arguing against each other roughly along consequentialist and deontological lines. On one side of the spectrum is the view that if AWS could deliver good results, in terms of the interests protected by international humanitarian law (IHL), there is no reason to ban them. On the contrary, the argument goes, if we take humanitarian considerations seriously, we should encourage the development and use of such weapons. Of course, proponents of this approach envision technological advancements that would actually make such results possible. On the other side of the spectrum are those who claim that even if AWS could, in terms of outcomes, adhere to the basic norms of IHL, their use should still be prohibited, whether on ethical or legal grounds. Usually, those holding this position are also sceptical that technology will ever be able to produce such benevolent systems.
