Abstract

There is an ongoing technological transformation in warfare, with ever more control of weapons being delegated to computer systems. There is considerable international concern among states and civil society about where humans fit into the control loop. Rather than moving to a point where computer programs control the weapons, this chapter proposes that striking the right balance between the best of human abilities and the best of computer functionality would have significantly greater humanitarian impact. The psychological literature on human decision making provides a foundation for the type of control required for weapons. A human control classification is provided that reframes autonomy/semi-autonomy in terms of levels of supervisory control. This allows for greater transparency in command and control and in the allocation of responsibility.

Introduction

There is considerable, and increasing, international discussion and debate about whether we should allow the decision to kill a human to be delegated to autonomous weapons systems (AWS) – systems that, once activated, can track, identify and attack targets with violent force without further human intervention. The discussion has ranged from moral and legal implications, to technical and operational concerns, to issues about international security. It seems clear that for the foreseeable future, we cannot guarantee that AWS will be able to fully comply with international humanitarian law (IHL), except perhaps in some very narrowly circumscribed circumstances. Apart from problems with applying the principles of distinction and proportionality to determine the legitimacy of targets, AWS are, by definition, less predictable than other weapons systems. This means that it is as yet unclear how we could guarantee the quality of Article 36 weapon reviews for both high-tech and low-tech nations. In addition, the US Department of Defense has pointed out a number of computing problems that bear on the use of AWS.

Some argue that such weapons could be used legally in certain very limited circumstances, while others maintain that at some point in the future they may be able to comply with IHL. However, these arguments are about an IHL-compliant technology that no one yet knows how to create. There is nothing wrong with technological ambitions or a general research agenda in civilian domains, but there is less room for such conjecture when discussing autonomous technologies of violence.
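The classification itself is not reproduced on this page, but to make the idea of "levels of supervisory control" concrete, here is a minimal Python sketch of how such a taxonomy might be encoded. The level names, their ordering and the authorisation rule are illustrative assumptions for this example, not the chapter's actual scheme:

```python
from enum import IntEnum

# A purely illustrative sketch: the level names and numbering below are
# assumptions made for this example, not the chapter's actual scheme.
class SupervisoryControl(IntEnum):
    HUMAN_DELIBERATES = 1   # a human deliberates about a target before initiating any attack
    HUMAN_SELECTS = 2       # the program suggests alternatives; a human chooses which to attack
    HUMAN_APPROVES = 3      # the program selects a target; a human must approve before the attack
    HUMAN_VETO_WINDOW = 4   # the program selects a target; a human has restricted time to veto
    COMPUTER_CONTROL = 5    # the program selects targets and attacks without human involvement

def retains_positive_human_control(level: SupervisoryControl) -> bool:
    """Illustrative rule: levels 1-3 require an affirmative human
    decision before an attack; levels 4-5 do not."""
    return level <= SupervisoryControl.HUMAN_APPROVES

# Example: a veto-only arrangement would not count as positive human control here.
print(retains_positive_human_control(SupervisoryControl.HUMAN_VETO_WINDOW))  # False
```

An ordered scale of this kind is what allows a classification to support command and control and the allocation of responsibility: each deployment can be audited against a named, numbered level rather than a vague label such as "semi-autonomous".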
