Abstract

Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report three experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot's motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments, which use robots grounded in developmental theory as tools for modeling human cognitive development, confirm the promise of developmental robotics. Additionally, the findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.

Highlights

  • These findings and others raise the intriguing possibility that young infants may be able to detect and use the equivalences between felt acts of the self and visible acts of the other[7] prior to language and before they have compared self and other in a mirror.

  • Inspired by the idea of a social identity function for imitation, we show that a computational learning architecture (Fig. 1a), combining neural networks (N.N.) for extraction of visual features (VF), the robot's motor internal state (MIS), a motor internal state prediction (MISP) that associates VF and MIS activities to allow posture recognition, short-term memory (STM), and novelty detection (Fig. 1b), was able to learn through mutual imitation encounters how to recognize a person at a later point in time.

  • When a new participant is introduced during the learning phase, the novelty module shows strong activity and a brief synchronous spike occurs in the Person Recognition N.N., corresponding to the recruitment of a specific artificial neuron for each participant.
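The novelty-driven recruitment described in the last highlight can be sketched as a distance-to-prototype rule: when an input is far from every stored prototype, novelty is high and a fresh "neuron" is recruited for that participant; otherwise the closest existing neuron responds. This is a minimal illustration under our own assumptions (the threshold, feature vectors, and prototype store are hypothetical, not the authors' implementation).

```python
import numpy as np

THRESHOLD = 1.0   # illustrative novelty threshold
prototypes = []   # one stored vector per recruited person-recognition neuron

def recognize(features):
    """Return (neuron_index, is_novel) for an input feature vector."""
    dists = [np.linalg.norm(features - p) for p in prototypes]
    if not dists or min(dists) > THRESHOLD:   # high novelty activity
        prototypes.append(features.copy())    # recruit a new neuron
        return len(prototypes) - 1, True
    return int(np.argmin(dists)), False

# Two hypothetical partners with distinct visual-feature vectors:
alice = np.array([0.9, 0.1, 0.0])
bob = np.array([0.0, 0.2, 1.1])
print(recognize(alice))          # (0, True): first partner recruits neuron 0
print(recognize(bob))            # (1, True): new partner recruits neuron 1
print(recognize(alice + 0.05))   # (0, False): recognized, no recruitment
```

One neuron per participant thus emerges directly from the novelty signal, mirroring the "brief synchronous spike" at each new encounter.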

Introduction

These findings and others raise the intriguing possibility that young infants may be able to detect and use the equivalences between felt acts of the self and visible acts of the other[7] prior to language and before they have compared self and other in a mirror. It has been demonstrated that in interpersonal interactions preverbal infants do not just recognize that another moves when they move (temporal contingency), but that another acts in the same manner as they do (structural congruence)[6,7,8]. This has been shown by measures of increased attention and positive affect at being imitated, as well as by neuroscience measures acquired during mutual imitation episodes (mu-rhythm responses in the infant electroencephalogram, EEG)[9]. Inspired by the idea of a social identity function for imitation, we show that a computational learning architecture (Fig. 1a), combining neural networks (N.N.) for extraction of visual features (VF), the robot's motor internal state (MIS), a motor internal state prediction (MISP) that associates VF and MIS activities to allow posture recognition, short-term memory (STM), and novelty detection (Fig. 1b), was able to learn through mutual imitation encounters how to recognize a person at a later point in time. We hypothesized that the sensory-motor architecture learns connections between perceptions and actions, and that a self-assessment signal (the prediction error) allows the robot to detect a new event (in our experiments, a new partner).

