Abstract

While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning autonomous robots. (1) Such a robot treats a human as an object, instead of as a person with inherent dignity. (2) A machine can only mimic moral actions; it cannot itself be moral. (3) A machine run by a program has no human emotions, and hence no feelings about the seriousness of killing a human. (4) Using such a robot would be a violation of military honor. We therefore conclude that the use of an autonomous robot in lethal operations should be banned.
