Abstract

Artificial intelligence in military operations comes in two kinds. First, there is narrow or specific intelligence: the autonomous ability to identify an instance of a species of target and to track its changes of position. Second, there is broad or general intelligence: the autonomous ability to choose a species of target, identify instances, track their movements, decide when to strike them, learn from errors, and improve its initial choices. These two kinds of artificial intelligence raise ethical questions mainly because of two features: the physical distance they put between the human agents deploying them and their targets, and their ability to act independently of those agents. These features raise three main ethical questions. First, how can the traditional martial virtues of fortitude and chivalry be maintained while operating lethal weapons from a safe distance? Second, how much autonomy should a machine be granted? And third, what risks should be taken with the possibility of technical error? This paper considers each of these questions in turn.
