Abstract

Mentalizing is the process of inferring others’ mental states and contributes to an inferential system known as Theory of Mind (ToM), a system critical to human interaction because it facilitates sense-making and the prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and become increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted with an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically co-present (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of non-literal language, co-present interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.
