Abstract

Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolution. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor, ranging from claims that LAWS will be more effective to arguments that they will be more moral warfighters than flesh-and-blood soldiers. However, these arguments only point in favor of autonomous weapons systems, failing to demonstrate why such systems should be lethal. In this paper I argue that if one grants the proponents' points in favor of LAWS, then, contrary to what might be expected, this leads to the conclusion that it would be both immoral and illegal to deploy lethal autonomous weapons, because the features that speak in favor of LAWS also undermine the need for them to be programmed to take lives. In particular, I argue that such systems, if lethal, would violate the moral and legal principle of necessity, which forbids the use of weapons that impose superfluous injury or unnecessary harm. I conclude by highlighting that the argument is not against autonomous weapons per se, but only against lethal autonomous weapons.
