Abstract

As AI advances, models of simulated humans are becoming increasingly realistic. A new debate has arisen about the ethics of interacting with these realistic agents—and in particular, whether any harms arise from ‘mistreatment’ of such agents. In this paper, we advance this debate by discussing a model we have developed (‘BabyX’), which simulates a human infant. The model produces realistic behaviours—and it does so using a schematic model of certain human brain mechanisms. We first consider harms that may arise due to effects on the user—in particular effects on the user’s behaviour towards real babies. We then consider whether there’s any need to consider harms from the ‘perspective’ of the simulated baby. The first topic raises practical ethical questions, many of which are empirical in nature. We argue the potential for harm is real enough to warrant restrictions on the use of BabyX. The second topic raises a very different set of questions in the philosophy of mind. Here, we argue that BabyX’s biologically inspired model of emotions raises important moral questions, and places BabyX in a different category from avatars whose emotional behaviours are ‘faked’ by simple rules. This argument counters John Danaher’s recently proposed ‘ethical behaviourism’. We conclude that the developers of simulated humans have useful contributions to make to debates about moral patiency—and also have certain new responsibilities in relation to the simulations they build.

Highlights

  • While most Artificial Intelligence (AI) systems perform specific tasks, such as playing Atari games or translating sentences, an increasing number aim to provide a more ‘complete’ model of a human

  • We are not being ‘biological chauvinists’, because our method is to look for features of our simplified brain model that are relevant to according ethical status and that could be found in all manner of nonbiological agents

  • This paper contributes to an ongoing discussion about the moral status of simulated human-AI agents

Summary

Introduction

While most Artificial Intelligence (AI) systems perform specific tasks, such as playing Atari games or translating sentences, an increasing number aim to provide a more ‘complete’ model of a human. These more holistic AI agents have humanlike bodies, implemented in robot hardware or computer graphics, as well as a range of humanlike abilities. The ethical questions for neurorobotic agents focus on the implications of building computer models that explicitly aim to reproduce the functionality of human or other biological brains [2]. BabyX is a platform for neurorobotics research: it allows us to implement embodied models of human brain mechanisms, and to test these by observing whether the simulated baby’s behaviour is similar to that of real babies.

Features of BabyX that highlight ethical questions
BabyX is an avatar
BabyX’s brain model
An embodied model of cognition
A model of episodic memory
A model in development
Motivation for the BabyX model
What is it to ‘mistreat’ a baby?
User‐related ethical issues for BabyX
User‐related ethical issues: some practical conclusions
Agent‐related issues for BabyX
Danaher’s ethical behaviourism
Assessing ethical behaviourism with implemented agent models
An agent that ‘fakes’ a behavioural claim to ethical status
BabyX’s model of emotions
A comparison between BabyX and the Level 2 agent
Agent‐related ethical issues: some practical conclusions
Summary and conclusions