Abstract

Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge. Here, we focus on a central tool for social interaction: verbal communication. We assess the extent to which humans co-represent (simulate and predict) a robot’s verbal actions. During a joint picture naming task, participants took turns in naming objects together with a social robot (Pepper, Softbank Robotics). Previous findings using this task with human partners revealed internal simulations on behalf of the partner down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. Here, with the robot, the partner-elicited inhibitory effects were not observed. Instead, naming was facilitated, as revealed by faster naming of word categories co-named with the robot. This facilitation suggests that robots, unlike humans, are not simulated down to the level of lexical selection. Instead, a robot’s speaking appears to be simulated at the initial level of language production where the meaning of the verbal message is generated, resulting in facilitated language production due to conceptual priming. We conclude that robots facilitate core conceptualization processes when humans transform thoughts to language during speaking.

Highlights

  • Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge

  • We focus on a social humanoid robot (Pepper, Softbank Robotics)

  • We describe how preverbal thoughts are transformed into articulated speech that makes our thoughts accessible to others, highlighting the core mechanisms of language production targeted in the current study


Introduction

Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge. During a joint picture naming task, participants took turns naming objects together with a social robot (Pepper, Softbank Robotics). Previous findings using this task with human partners revealed internal simulations on behalf of the partner down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. We assess the extent to which a social robot’s verbal actions, in social interaction with humans, are simulated and predicted, or in other words co-represented, and explore the consequences of robot verbal co-representation on human language production. Despite early findings suggesting that humans treat social technologies like real people[11], the intricacies of robot co-representation present a more complex picture (for a review see Ref.[10]). Investigations on this topic predominantly focused on motor co-representation of non-human agents and suggested that the shared representational system is predominantly tuned to other humans[12,13]. Investigations in which participants completed real-life joint tasks with humanoid robots corroborated these conclusions, revealing that humans can co-represent a humanoid robot in a joint task (Social Simon Task[18])[19,20].

