Abstract
Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.
Meaningful Human Control over Autonomous Systems
What would happen if such systems could operate without further human control and intervention? What if armed drones, after being programmed and activated, could select and engage targets without further human intervention, and civilians were mistakenly killed in an attack? What if—as happened in 2016—the driver of a car in autonomous mode were killed in a crash because a large white truck in front of the car was misclassified by the system as a piece of the sky?
In the legal–political debate on autonomous weapon systems of the past few years, these ethical concerns have been synthesized in the following principle. Principle of meaningful human control: future weapons systems must preserve meaningful human control over the use of force, that is, humans, not computers and their algorithms, should remain in control of, and morally responsible for, relevant decisions about military operations. (Article 36, 2015)
Summary
As a result of rapid and impressive developments in sensor technology, AI and machine learning, robotics, mechanical engineering, and mechatronics, systems with various degrees of autonomy will be available on a large scale in the coming years. These autonomous and semiautonomous systems are able to achieve goals and perform tasks without much intervention and control by human beings. (b) As a matter of principle, it is morally wrong to let a machine be in control of the life and death of a human being, no matter how technologically advanced the machine is (Wagner, 2014). According to this position, which has been stated, among others, by The Holy See (Tomasi, 2013), these applications are mala in se (Wallach, 2013). (c) In the case of war crimes or fatal accidents, the presence of an autonomous weapon system in the operation may make it more difficult, or impossible altogether, to hold military personnel morally and legally responsible.