Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor their human developers may be appropriate candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I then apply a debunking argument against deontological intuitions to show that retributive intuitions cannot justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. Drawing a parallel with recent work on implicit biases, I make a case for taking moral responsibility for retributive intuitions: in the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Highlights

  • Robotization is an increasingly pervasive feature of our lives, and sufficiently autonomous robots may cause harm without clear candidates for moral blame

  • The retributive intuitions underlying the retribution gap are properly understood as deontological intuitions; a debunking argument shows that they cannot justify retribution in cases of robot harm without eligible targets for moral blame

  • Discounting these intuitions may seem like an uphill battle, insofar as intuitions are automatic, knee-jerk responses beyond conscious control (e.g., Greene 2008; Haidt 2001), yet, as with implicit biases, we can and should take moral responsibility for them

Introduction

Our lives are increasingly affected by robotization. Recent developments in robotics and machine learning are initiating a “new generation of systems that rival or exceed human capabilities” (Kaplan 2015, 3). When such systems cause harm, there may be no appropriate target for retributive blame, creating what John Danaher calls a retribution gap. I contend that the most pressing and morally significant gap arises between retributive intuitions and what we ought to do with them, rather than between those intuitions and the unsuccessful search for appropriate targets of blame in cases of robot wrongdoing.
