Abstract

This article is a contribution to the ethical discussion of lethal autonomous weapons. The emergence of military robots acting independently on the battlefield is seen as an inevitable stage in the development of modern warfare, because such robots will provide a critical advantage to any army that deploys them. Although social movements calling for a ban on “killer robots” already exist, there are also ethical arguments in favor of developing these technologies. In particular, the utilitarian tradition may find military robots ethically permissible if “non-human combat” would minimize the number of human victims. A deontological analysis, for its part, might suggest that ethics is impossible without an ethical subject. Immanuel Kant’s ethical philosophy accommodates the intuition that there is a significant difference between a situation in which a person decides to kill another person and one in which a machine makes that decision. Like animals, robots become borderline agents at the edges of “moral communities.” Drawing on the debate over animal rights, we see how Kantian ethics operates with non-human agents. The key problem in the use of autonomous weapons is the transformation of war itself and the unpredictable risks that follow from blurring the distinction between war and police work. The hypothesis of the article is that robots would not need to kill anyone to defeat the enemy. But if no one dies in a war, then there is no reason not to extend its operations to non-combatants, and no reason to sue for peace. A utilitarian analysis overlooks the possibility of such consequences. The main problem with lethal autonomous weapons is thus their autonomy, not their lethality.

