Abstract

The removal of direct human involvement from the decision to apply lethal force is at the core of the controversy surrounding autonomous weapon systems, as well as broader applications of artificial intelligence and related technologies to warfare. Far from purely a technical question of whether it is possible to remove soldiers from the ‘pointy end’ of combat, the emergence of autonomous weapon systems raises a range of serious ethical, legal, and practical challenges that remain largely unresolved by the international community. In response, the international community has seized on the concept of ‘meaningful human control’. Meeting this standard will require doctrinal and operational responses, as well as technical responses at the design stage. This paper focuses on the latter, considering how value sensitive design could assist in ensuring that autonomous systems remain under the meaningful control of humans. However, this article will also challenge the tendency to assume a universalist perspective when discussing value sensitive design. Drawing on previously unpublished quantitative data, this paper critically examines how perspectives on key ethical considerations, including conceptions of meaningful human control, differ among policymakers and scholars in the Asia Pacific. Based on this analysis, this paper calls for the development of a more culturally inclusive form of value sensitive design and puts forward the basis of an empirically based normative framework for guiding designers of autonomous systems.

Highlights

  • The removal of direct human involvement from the decision to use lethal force raises a number of serious ethical and legal barriers that remain largely unresolved by the international community

  • This paper considers how the value sensitive design methodology could assist policymakers, military planners, researchers, and manufacturers in their efforts to develop increasingly autonomous weapon systems and military applications of artificial intelligence that remain ethically sound and under the meaningful control of humans

  • The stakeholders associated with autonomous weapon systems present a particular challenge to the civilian researcher: although AWS rely on dual-use technologies, they remain largely under the purview of military laboratories and are designed principally for use by those in uniform, a group from which it can be extremely difficult to obtain authorisation for formal interviews, especially across multiple states

Introduction

The removal of direct human involvement from the decision to use lethal force raises a number of serious ethical and legal barriers that remain largely unresolved by the international community. Central to ensuring continuing accountability for these systems is maintaining meaningful human control, either through doctrinal tools or through deliberate design decisions. Broadly speaking, maintaining meaningful human control over future autonomous systems will require responses that fall into two general categories: those that achieve an ethical outcome by shaping human behaviour and perceptions of the system, and those that ‘bake in’ factors that promote ethical behaviour by the system itself. Among the earliest technical responses specific to autonomous weapons was Arkin’s ethical governor model, which remains a useful example of the latter category. The concept of values in design is likewise firmly in the latter category, in that it focuses on proactively embedding human values and ethical standards into the design of systems.
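The ‘baked-in’ category can be illustrated with the general pattern Arkin’s ethical governor is described as following: a component that evaluates each proposed action against encoded constraints and vetoes any action that violates them. The sketch below is a minimal, purely illustrative rendering of that veto-gate pattern; all class names, fields, and constraints are hypothetical and do not represent Arkin’s actual architecture.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical representation of a proposed action; fields are illustrative only.
@dataclass
class ProposedAction:
    target_type: str            # e.g. "military", "civilian", "unknown"
    expected_collateral: float  # estimated probability of civilian harm
    human_authorised: bool      # whether a human operator approved this action

# A constraint maps a proposed action to True (permitted) or False (forbidden).
Constraint = Callable[[ProposedAction], bool]

class EthicalGovernor:
    """Veto gate: an action is permitted only if every encoded constraint passes.

    This mirrors the general 'embed ethical limits at design time' idea the
    paper describes; it is not a faithful model of any deployed system.
    """
    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def permit(self, action: ProposedAction) -> bool:
        # Every constraint must hold for the action to proceed.
        return all(check(action) for check in self.constraints)

# Illustrative constraints: engage only military targets, cap collateral risk,
# and require explicit human authorisation (meaningful human control).
constraints: List[Constraint] = [
    lambda a: a.target_type == "military",
    lambda a: a.expected_collateral < 0.05,
    lambda a: a.human_authorised,
]

governor = EthicalGovernor(constraints)
print(governor.permit(ProposedAction("military", 0.01, True)))  # True
print(governor.permit(ProposedAction("unknown", 0.01, True)))   # False
```

The design point the sketch makes is that the constraint check sits inside the system itself rather than in doctrine or training, which is what distinguishes this category of response from those that shape human behaviour around the system.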
