Abstract

A goal of any modern war ought to be to minimize innocent death and injury while maximizing the potential to defeat the enemy decisively, as quickly and efficiently as possible. As the techniques of warfighting have evolved, reliance on sophisticated technology has grown, and autonomous systems, both lethal and non-lethal, will likely play an increasing role in future wars. Much of the current debate surrounding such systems, autonomous weapons systems (AWS) in particular, is “pre-implementation” and potentially lags behind political and military realities: nation-states and non-state groups are actively pursuing the deployment of such systems on the battlefield. The “pre-implementation” character of these discussions risks rendering them irrelevant as these actors continue to move ahead with development. As such, philosophers, military tacticians, political leaders, and other researchers in the field of artificial intelligence ethics must get ahead of the curve and examine what the battlefield and the laws of armed conflict ought to look like when, not if, such systems are deployed. Supporting this move from “pre-implementation” to “post-implementation”, this paper makes a normative claim about the relationship between innocents on the battlefield and combatants, given that certain preconditions are met. I argue that, if and only if autonomous weapon systems reach a point in their development where they are capable of being both more discriminating in their target selection and more proportionate in their response to threats than human soldiers, then innocents on the battlefield (as a social kind) hold a claim-right against both sets of combatants not to be unjustly harmed, a claim that AWS be used in place of human soldiers.
