Abstract

While the story of Stanislav Petrov – the Soviet Lieutenant Colonel who likely saved the world from nuclear holocaust in 1983 – is often trotted out to advocate for the view that human beings ought to be kept “in the loop” of automated weapons’ responses, I argue that the episode in fact belies this reading. By attending more closely to the features of this event – to Petrov’s professional background, to his familiarity with the warning system, and to his decisions to defy his protocol – it becomes clear that Petrov was not seamlessly working “in his loop,” but reassessing it entirely. I claim that this exhibits a paradox intrinsic to all automated loops: namely, that their optimum function in fact rests on unforeseen human interventions that cannot be reliably codified ex ante, that is, that good judgment cannot simply be “programmed into” their protocols. This dependence, moreover, reveals automation’s ineluctable need for virtue ethics – not in the usual sense (whereby ethics deliberates about, say, possible damages and loss of life), but in the sense that ethical judgment, rightly understood, entails the reassessment of all of the conditions surrounding a decision, including whether a protocol should be followed at all.

