Abstract

The integration of AI into resort-to-force decision making gives rise to substantial threats and problems. One significant challenge is that incorporating a machine into the decision-making process can result in responsibility gaps for decisions informed or made by the machine. Ethically, a situation in which lethal violence can be employed without a responsible subject to blame for any wrongdoing is unacceptable. But how can responsibility be attributed if a machine is involved in decision making on war and peace? To address this question, I introduce the concept of ‘proxy responsibility’. I contend that since we cannot ascribe moral responsibility to the AI artefact itself, we must identify responsibility relations in the structures within which AI decision making operates. A dynamic and contextual concept of responsibility situates AI in the broader decision-making process of the political, military, and economic system, and helps to unfold the different layers of responsibility among the actors involved. I argue that the further we move toward machine autonomy, the denser the web of proxy responsibility relations in the environment of AI must become in order to close the aforementioned responsibility gaps.
