Abstract

The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people can no longer detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to investigate how sensitive humans are to behaviors produced by humans compared with those produced by AI. This paper presents results from a behavioral Turing test, in which participants interacted with a human, a simple AI, or a “social” AI within a complex videogame environment. Participants (66 total) played an open-world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min, after which they reported their beliefs about the agent: three Likert measures of how much they trusted and liked the co-player and the extent to which they perceived it as a “real person,” followed by an interview about their overall perceptions and the cues they used to judge humanness. T-tests, analysis of variance (ANOVA), and Tukey's HSD were used to analyze the quantitative data, and Cohen's kappa and χ2 were used to analyze the interview data. Our results suggest that participants found it difficult to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data, and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.
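The paper does not specify the software used for these analyses. As a minimal sketch of how the named tests could be run, the Python example below uses scipy, statsmodels, and scikit-learn; all variable names and data values are fabricated placeholders for illustration, not the study's measurements.

  # Illustrative analysis sketch (assumed tooling; placeholder data).
  import numpy as np
  from scipy import stats
  from statsmodels.stats.multicomp import pairwise_tukeyhsd
  from sklearn.metrics import cohen_kappa_score

  rng = np.random.default_rng(0)

  # Hypothetical Likert "perceived humanness" ratings, one array per
  # co-player condition (22 participants per condition).
  human = rng.integers(3, 8, 22)
  simple_ai = rng.integers(1, 6, 22)
  social_ai = rng.integers(2, 8, 22)

  # Independent-samples t-test between two conditions.
  t, p_t = stats.ttest_ind(human, social_ai)

  # One-way ANOVA across all three conditions...
  f, p_anova = stats.f_oneway(human, simple_ai, social_ai)

  # ...followed by Tukey's HSD for pairwise comparisons.
  scores = np.concatenate([human, simple_ai, social_ai])
  groups = ["human"] * 22 + ["simple"] * 22 + ["social"] * 22
  print(pairwise_tukeyhsd(scores, groups))

  # Interview data: Cohen's kappa for inter-rater agreement on coded
  # responses, and a chi-square test on judgment counts per condition.
  rater_a = ["social", "social", "none", "social", "none"]
  rater_b = ["social", "none", "none", "social", "none"]
  print("kappa:", cohen_kappa_score(rater_a, rater_b))

  counts = np.array([[15, 7],   # rows: condition; cols: judged human / AI
                     [6, 16],
                     [13, 9]])
  chi2, p_chi, dof, _ = stats.chi2_contingency(counts)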

Highlights

  • Our study attempted to shed light on how sensitive humans are to complex behaviors of human and AI co-players within a naturalistic game environment

  • We compared participants’ accuracy in distinguishing between the behaviors of human co-players and those of AI co-players that were either “simplistic” or “social,” the latter having a built-in capacity to sense social cues and determine for themselves how to interact with participants using cognitively plausible, humanlike motivations

Introduction

The concept of Artificial Intelligence (AI) is not new. Alan Turing, the father of computer science, predicted that truly “intelligent” machines would appear around the year 2000 (Turing, 1950). Advances in deep learning have produced near human-level performance in image and speech recognition (LeCun et al., 2015); recent algorithms have even surpassed human performance on some of these tasks. There is a societal fear that AI will be used in ways that are detrimental to the general public (Piper, 2019). Some benevolent AI creators have used the technology to protect rainforests (Liu et al., 2019) or to create diagnostic algorithms that can detect breast cancer better than human experts (McKinney et al., 2020). Other applications, however, can produce undesirable consequences for the general public, such as job loss as a result of automation (Reisinger, 2019) or racial discrimination resulting from biased algorithms used by the U.S. criminal justice system (Angwin et al., 2016).

