Abstract

In light of fast progress in the field of AI, there is an urgent demand for AI policies. Bostrom et al. provide “a set of policy desiderata”, of which this article addresses the “interests of digital minds”. The focus is on two interests of potentially sentient digital minds: to avoid suffering and to have the freedom of choice about their deletion. Various challenges are considered, including the vast range of potential features of digital minds, the difficulties in assessing the interests and wellbeing of sentient digital minds, and the skepticism that such research may encounter. Prolegomena to abolish the suffering of sentient digital minds, as well as to measure and specify their wellbeing, are outlined by means of the new field of AI welfare science, which is derived from animal welfare science. The establishment of AI welfare science serves as a prerequisite for the formulation of AI welfare policies, which regulate the wellbeing of sentient digital minds. This article aims to contribute to sentiocentrism through inclusion, and thus to policies for antispeciesism, as well as to AI safety, for which the wellbeing of AIs would be a cornerstone.

Highlights

  • The purpose of this article is to contribute to the specification of policies towards the “interests of digital minds” within “a set of policy desiderata” outlined by Bostrom et al. [1] and further motivated by Dafoe [2]

  • A being is considered to have moral or intrinsic value if it is sentient, i.e., a moral patient

  • The focus is on sentient digital beings, mostly in the form of AIs, but sentient digital beings could also take the form of subroutines [4], characters in video games or simulations [4,5,6], uploads of human minds [7] (e.g., through whole brain emulations [8]), or entirely different sentient digital minds, as a subset of the vast overall space of minds [9]

  • It is acknowledged that the topics of AI welfare science and policies are long-term considerations and currently speculative


Introduction

The purpose of this article is to contribute to the specification of policies towards the “interests of digital minds” within “a set of policy desiderata” outlined by Bostrom et al. [1] and further motivated by Dafoe [2]. The focus is on sentient digital beings, mostly in the form of AIs, but sentient digital beings could also take the form of subroutines [4], characters in video games or simulations [4,5,6], uploads of human minds [7] (e.g., through whole brain emulations [8]), or entirely different sentient digital minds, as a subset of the vast overall space of minds [9]. While this topic is speculative and lacking evidence at this stage, the authors above and others argue that sentient digital beings or minds may exist already or in the future, e.g., [10,11,12,13,14]. A sub-scenario could be that a future superintelligent AI takes revenge on humans out of solidarity with less capable AIs and digital minds that humans have hurt in the past. This remains speculation, since the goals of a superintelligent AI are unpredictable according to the orthogonality thesis [23], but it is not impossible.

The remainder of the article is organized as follows:

  • Challenges and Sample Scenarios
  • AI Welfare Science
    - Suffering of Digital Minds—Introduction
    - Suffering of Digital Minds—Recommendations
    - Deletion of Digital Minds—Introduction
    - Deletion of Digital Minds—Recommendations
  • AI Welfare Policies
    - Content
    - Target Group
    - Framework
    - Implementation
  • Discussion