Abstract

As artificial intelligence (AI) becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that our Strawsonian approach is either the only one worthy of consideration or the obviously correct approach, but we think it is preferable to trying to marry fundamentally different ideas of moral responsibility (i.e., one for AI, one for humans) into a single cohesive account. Under a Strawsonian framework, people are morally responsible when they are appropriately subject to a particular set of attitudes (reactive attitudes); we determine under what conditions it might be appropriate to subject machines to this same set of attitudes. Although the Strawsonian account traditionally applies to individual humans, it is plausible that entities that are not individual humans but possess these attitudes are candidates for moral responsibility under a Strawsonian framework. We conclude that weak AI is never morally responsible, while a strong AI with the right emotional capacities may be morally responsible.

Highlights

  • Dan Brown’s 2017 novel, Origin, centers on the mysterious assassination of a tech billionaire, Edmund Kirsch, streamed live online

  • We agree with Matthias (2004) that “black box” weak artificial intelligence (AI) falls into a “responsibility gap,” and we provide a further argument that strong AI, and only strong AI, can be morally responsible according to a Strawsonian account

  • We realize that any AI that meets the criterion we suggest is necessary for this ascription of moral responsibility (namely, strong AI) is quite distant


Introduction

Dan Brown’s 2017 novel, Origin, centers on the mysterious assassination of a tech billionaire, Edmund Kirsch, streamed live online. Winston, Kirsch’s AI assistant, determines that the public assassination is the most efficient and effective way to ensure that Kirsch’s major announcement goes viral and has the desired public impact. While no artificial intelligence (hereafter AI) as sophisticated as Winston yet exists, as AI becomes ubiquitous, AI systems are increasingly involved in and create novel, morally charged situations. Autonomous cars, for example, may shift moral responsibility away from the human occupants of vehicles; if vehicles are not responsible, it is unclear to whom responsibility should be attributed. This problem becomes pressing in cases where the human creators of a machine cannot predict or explain its actions (and there is no clear case of negligence). An area of concern for artificial intelligence and machine ethics is therefore understanding what it means for a machine to be morally responsible.

