Abstract

Humans interpret and predict others’ behaviors by ascribing intentions or beliefs to them, or in other words, by adopting the intentional stance. As artificial agents increasingly populate our daily environments, the question arises whether (and under which conditions) humans apply this “human model” to understand the behaviors of these new social agents. Thus, in a series of three experiments, we tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial tendency to adopt the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after having interacted with a more socially available, human-like robot, whereas no modulation of the adoption of the intentional stance emerged toward a mechanistic robot. We conclude that short experiences with humanoid robots, presumably by inducing a “like-me” impression and social bonding, increase the likelihood of adopting the intentional stance.
