Abstract
Robots are used for language tutoring increasingly often, and commonly programmed to display non-verbal communicative cues such as eye gaze and pointing during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) if these cues are in conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or familiar label; and (iii) relied differently on a robot’s non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot versus those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label versus a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on non-verbal cues of a robot versus those of a human and provide preliminary evidence that differences in anthropomorphism may interact with children’s reliance on a robot’s non-verbal behaviors.
Highlights
Children are increasingly often exposed to new technologies in educational settings, such as applications on tablets and smartphones
The primary aim of the current study is to investigate whether children rely on a robot's pointing and eye gaze, when these are contrasted with verbal labels, to the same extent as they rely on a human speaker's pointing and eye gaze
The current study addresses three questions: 1. How do children weigh non-verbal cues and verbal cues from a robot versus a human speaker? 2. Do children weigh non-verbal cues differently depending on whether these contrast with a novel or a familiar label? 3. Does children's reliance on a robot's non-verbal cues depend on the degree to which they attribute human-like properties to the robot?
Summary
Children are increasingly often exposed to new technologies in educational settings, such as applications on tablets and smartphones. One recent technology that has been employed for educational purposes involves social robots [1]. The present study examined children's reliance on a robot's non-verbal cues (pointing and eye gaze) when these conflicted with verbal labels, and compared it to their reliance on a human speaker's non-verbal cues.