Abstract

Artificial agents are becoming prevalent across human life domains. However, the neural mechanisms underlying human responses to these new, artificial social partners remain unclear. The uncanny valley (UV) hypothesis predicts that humans prefer anthropomorphic agents but reject them if they become too humanlike—the so-called UV reaction. Using fMRI, we investigated neural activity when subjects evaluated artificial agents and made decisions about them. Across two experimental tasks, the ventromedial prefrontal cortex (VMPFC) encoded an explicit representation of subjects' UV reactions. Specifically, VMPFC signaled the subjective likability of artificial agents as a nonlinear function of humanlikeness, with selective low likability for highly humanlike agents. In exploratory across-subject analyses, these effects explained individual differences in psychophysical evaluations and preference choices. Functionally connected areas encoded critical inputs for these signals: the temporoparietal junction encoded a linear humanlikeness continuum, whereas nonlinear representations of humanlikeness in dorsomedial prefrontal cortex (DMPFC) and fusiform gyrus emphasized a human–nonhuman distinction. Following principles of multisensory integration, multiplicative combination of these signals reconstructed VMPFC's valuation function. During decision making, separate signals in VMPFC and DMPFC encoded subjects' decision variable for choices involving humans or artificial agents, respectively. A distinct amygdala signal predicted rejection of artificial agents. Our data suggest that human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (humanlikeness in nonhuman agents). Thus, a basic principle known from sensory coding—neural feature selectivity from linear–nonlinear transformation—may also underlie human responses to artificial social partners.

Significance Statement

Would you trust a robot to make decisions for you? Autonomous artificial agents are increasingly entering our lives, but how the human brain responds to these new artificial social partners remains unclear. The uncanny valley (UV) hypothesis—an influential psychological framework—captures the observation that human responses to artificial agents are nonlinear: we like increasingly anthropomorphic artificial agents, but feel uncomfortable if they become too humanlike. Here we investigated neural activity when humans evaluated artificial agents and made personal decisions about them. Our findings suggest a novel neurobiological conceptualization of human responses toward artificial agents: the UV reaction—a selective dislike of highly humanlike agents—is based on nonlinear value-coding in ventromedial prefrontal cortex, a key component of the brain's reward system.
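
The abstract's central computational claim is that a UV-shaped valuation can arise when a linear humanlikeness signal is combined multiplicatively with a nonlinear human–nonhuman distinction signal. The toy sketch below is our own illustration of that idea, not the authors' fitted model; the human-detection threshold, the "valley" width, and the specific functional form are assumptions chosen only to show how such a combination yields selectively low likability for highly humanlike nonhuman agents.

```python
import numpy as np

# Toy illustration (not the authors' fitted model): combine a linear
# humanlikeness signal with a nonlinear human/nonhuman distinction signal,
# in the spirit of the linear-nonlinear transformation described above.

humanlikeness = np.linspace(0.0, 1.0, 101)          # 0 = clearly artificial, 1 = human
human_detector = (humanlikeness >= 0.95).astype(float)  # assumed "detected as human" threshold

# A nonlinear "near-human" term peaking just below full humanlikeness
# (assumed Gaussian shape and width, purely illustrative).
penalty_width = 0.15
near_human = np.exp(-((humanlikeness - 1.0) ** 2) / (2 * penalty_width ** 2))

# Multiplicative combination: likability rises linearly with humanlikeness,
# but the near-human term multiplied by the inverted detector selectively
# suppresses value for highly humanlike agents NOT detected as human -- the
# uncanny valley. Detected humans recover high likability.
likability = humanlikeness - near_human * (1.0 - human_detector)

# Moderately humanlike (index 50), highly humanlike nonhuman (index 90),
# and fully human (index 100) agents:
print(np.round([likability[50], likability[90], likability[100]], 2))
```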

Highlights

  • Would you trust a robot to make personal choices for you? Artificial agents capable of decision making are becoming more prevalent across human life domains (Broadbent, 2017)

  • Dorsomedial prefrontal cortex (DMPFC) activity followed the humanlikeness continuum for nonhuman agents but sharply increased for human agents (Fig. 3E–G). We modeled this activity with a “human detection” regressor (Fig. 3F, a dummy variable distinguishing human from nonhuman stimuli) in addition to linear humanlikeness; a minimal sketch of such a design matrix follows this list. (Although Figs. 2C and 3G may look similar, these data are averaged across trials and subjects; our ROI analysis within each subject indicated that, whereas ventromedial prefrontal cortex (VMPFC) activity was best explained by joint likability and humanlikeness coding, DMPFC activity was best explained by a human detection regressor.) DMPFC activity emphasized differences between human and nonhuman stimuli, suggesting a role in distinguishing human from artificial agents

  • Functional connections existed between VMPFC and both DMPFC and the fusiform gyrus (FFG) (Fig. 6J, magenta), but we found no direct coupling between VMPFC and the temporoparietal junction (TPJ)
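
Following up on the DMPFC highlight above, the sketch below shows one way a trial-level design matrix with a linear humanlikeness regressor and a “human detection” dummy regressor could be constructed and fit. This is our illustration, not the authors' fMRI analysis pipeline; the trial count, detection threshold, and simulated ROI response are assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): design matrix with a linear
# humanlikeness regressor plus a "human detection" dummy, fit by OLS.

rng = np.random.default_rng(0)
n_trials = 120
humanlikeness = rng.uniform(0.0, 1.0, n_trials)         # stimulus humanlikeness per trial
human_detection = (humanlikeness > 0.9).astype(float)   # hypothetical human/nonhuman dummy

# Design matrix: intercept, linear humanlikeness, human-detection dummy.
X = np.column_stack([np.ones(n_trials), humanlikeness, human_detection])

# Simulated ROI response (illustration only): weak linear coding plus a
# sharp increase for stimuli flagged as human, plus noise.
y = 0.2 * humanlikeness + 1.0 * human_detection + rng.normal(0.0, 0.3, n_trials)

# Ordinary least squares estimates of the regressor weights (betas).
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, humanlikeness, human-detection betas:", np.round(betas, 2))
```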

Introduction

Artificial agents capable of decision making are becoming more prevalent across human life domains (Broadbent, 2017). Would you trust a robot to make personal choices for you? Such artificial (i.e., synthetic, not naturally occurring) agents can elicit positive emotions, but they can also make humans uncomfortable.
