Abstract
The present study aimed to assess the extent to which human participants co-represent the lexico-semantic processing of a humanoid robot partner. Specifically, we investigated whether participants would engage their speech production system to predict the robot's upcoming words, and how they would progressively adapt to the robot's verbal behavior. In the experiment, a human participant and a robot alternated in naming pictures of objects from 15 semantic categories while the participant's electrophysiological activity was recorded. We manipulated word frequency as a measure of lexical access: half of the pictures had high-frequency names and the other half low-frequency names. Additionally, the robot was programmed to produce semantic category labels (e.g., "tool" for the picture of a hammer) instead of the more typical basic-level names (e.g., "hammer") for items in five of the categories. Analysis of the stimulus-locked activity revealed a comparable event-related potential (ERP) effect of word frequency both when it was the participant's turn to speak and when it was the robot's. Analysis of the response-locked activity showed a different pattern for category-level and basic-level responses in the first but not the second part of the experiment, suggesting that participants adapted to the robot's lexico-semantic patterns over time. These findings provide empirical evidence for two key points: (1) participants engage their speech production system to predict the robot's upcoming words, and (2) partner-adaptive behavior facilitates comprehension of the robot's speech.