Abstract

Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to suggest that responsibility gaps should sometimes be welcomed, our argument is novel. Others have argued that responsibility gaps should sometimes be welcomed because they can reduce or eliminate the psychological burdens caused by tragic moral choice-situations. By contrast, our argument explains why responsibility gaps should sometimes be welcomed even in the absence of tragic moral choice-situations, and even in the absence of psychological burdens.
