Abstract

This Chapter offers a framework for analyzing the intersection of artificial intelligence (AI), ethics, and law. It does so, as Part 1 explains, by (1) suggesting potential limitations on AI consciousness and identifying the implications of those limitations for ethics and law, and (2) acknowledging three possible philosophical objections to this line of analysis and providing reasons to reject each of them. Part 2 explores the role that consciousness plays in forming objectives, including in making relevant moral and other value judgments. It suggests that as long as AI lacks consciousness we will have difficulty regulating it — our ethics and law often rely on intent in assigning, respectively, moral responsibility and legal liability — and difficulty using it to regulate ourselves — value judgments play an important role in resolving ethical and legal disputes. It also notes that conscious AI is likely to have very different first-person experiences than we do — and hence very different forms of intent and values than we have — giving rise to another set of difficulties for regulating it and for it regulating us. With this sketch of an argument in place, Part 3 addresses three potential philosophical objections to it: that consciousness as a matter of theory cannot have the sort of causal effect on behavior that Part 2 presumes; that consciousness as a matter of empirical fact does not have that sort of causal effect; and that Part 2 relies on a dubious understanding of free will. Part 3 contends that none of these objections is persuasive. The Chapter concludes that the analysis in Part 2 warrants further development.
