Abstract

The ongoing conversation on AI ethics and politics is in full swing and has spread to the general public. Rather than contributing by engaging with the issues and views discussed, we want to step back and comment on the widening conversation itself. We consider evolved human cognitive tendencies and biases, and how they frame and hinder the conversation on AI ethics. Primarily, we describe our innate human capacities known as folk theories and how we apply them to phenomena of different implicit categories. Through examples and empirical findings, we show that such tendencies specifically affect the key issues discussed in AI ethics. The central claim is that much of our mostly opaque intuitive thinking has not evolved to match the nature of AI, and this causes problems in democratizing AI ethics and politics. Developing awareness of how our intuitive thinking affects our more explicit views will add to the quality of the conversation.

Highlights

  • Our everyday thinking, in dealing with the world around us, mostly relies on evolved cognitive classifications and categorizations

  • Many confusions in discussions of artificial intelligence (AI) ethics among the general public are, we argue here, partially explained by how AI is a counterintuitive concept [104, 105]

  • AI commonly refers to technology that has the capacity for making decisions either autonomously or through enhancing decisions made by humans (a minimal sketch contrasting these two modes follows this list)
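The sketch below is our own illustration of that distinction, not anything defined in the paper; all names, such as `LoanApplication` and `model_score`, are hypothetical stand-ins. It contrasts a system that decides fully autonomously with one that only informs a human's decision.

```python
# Illustrative sketch only: autonomous decision-making versus decision
# support ("enhancing decisions made by humans"). All names are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class LoanApplication:
    income: float
    debt: float

def model_score(app: LoanApplication) -> float:
    """Stand-in for any learned model: returns an approval probability."""
    return max(0.0, min(1.0, (app.income - app.debt) / max(app.income, 1.0)))

def autonomous_decision(app: LoanApplication) -> bool:
    """Autonomous mode: the system decides, with no human in the loop."""
    return model_score(app) >= 0.5

def decision_support(app: LoanApplication,
                     human: Callable[[float], bool]) -> bool:
    """Support mode: the system only surfaces a score; the human decides."""
    return human(model_score(app))

app = LoanApplication(income=50_000, debt=20_000)
print(autonomous_decision(app))                        # system's own verdict
print(decision_support(app, human=lambda s: s > 0.8))  # stricter human call
```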



Introduction

Our everyday thinking, in dealing with the world around us, mostly relies on evolved cognitive classifications and categorizations (see Atran [5], Boyer [23]). Due to these evolved capacities, we are able to predict changes in our environment and update these predictions rapidly [118]. We first describe the cognitive process of categorizing and show why the concept of artificial intelligence (AI) does not fit into our intuitive everyday categories. This lack of fit means that AI can be viewed as a moderately counterintuitive concept [116]. We will refer to AI in its narrow sense as an algorithm that functions purposefully in an at least partially predictable environment [179].
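To make this narrow sense concrete, here is a minimal sketch of our own, not the authors' formalism: an agent acts purposefully toward a goal in an environment whose response to actions is predictable only up to noise. Every name and parameter below is an assumption chosen for illustration.

```python
# Minimal sketch of the narrow definition: an algorithm that functions
# purposefully in an at least partially predictable environment.
# Illustrative assumptions throughout; not from the paper.

import random

GOAL = 10.0

def environment_step(state: float, action: float) -> float:
    """Partially predictable: a known response to the action, plus noise."""
    return state + action + random.gauss(0.0, 0.5)

def choose_action(state: float) -> float:
    """Purposeful behavior: steer the state toward the goal, capped at +/-1."""
    return max(-1.0, min(1.0, GOAL - state))

state = 0.0
for _ in range(30):
    state = environment_step(state, choose_action(state))
print(f"final state ~= {state:.2f} (goal {GOAL})")  # typically near the goal
```

The point of the sketch is only that such a system is purposeful (it pursues a goal) without its environment being fully predictable, which is what distinguishes it from a fixed lookup procedure.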

Anthropomorphizing across categories
Is it ok to abuse a cute robot?
Anthropomorphism and the ideal of rationality of artificial intelligences
The “they only do what they have been programmed to do” fallacy
The doctrine of double effect and the problems of folk consequentialism
Values beyond safety
Egocentric teleology bias
Biases of wish fulfillment in risk estimation
Conclusion