Abstract

Summary form only given. This paper presents a learning model for head movement imitation based on motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding this imitation ability is what equivalence between themselves and others infants utilize to imitate actions presented by adults (Meltzoff and Moore, 1997). A self-produced head or facial movement cannot be perceived in the same modality in which the action of another is perceived. Some researchers have developed robotic models to imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence for imitating human head movement by using only self-detected sensor information. On the basis of evidence that infants imitate actions more readily when they observe them with movement rather than without, my model utilizes motion information about actions. The motion of a self-produced action, detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. Using these representations, the robot learns self-other motion equivalence for head movement imitation through the experience of visually tracking a human face. In the face-to-face interactions shown, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side. The paper also shows the optical flow detected when the person turned her head from the center to the robot's left. Then, the ability to visually track a human face enables the robot to turn its head in the same direction as the person, because the position of the person's face moves in the camera image. The paper also shows the robot's movement vectors detected when it turned its head to the left by tracking the person's face; the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments using the robot shown verified that the model enabled the robot to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was then able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 2005).
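The core mechanism described above, associating eight-direction optical-flow activations with eight-direction self-movement activations through Hebbian learning, can be illustrated with a minimal sketch. The class and function names, the learning rate, and the binning details below are assumptions made for illustration, not the paper's actual implementation; only the eight-direction representation and the Hebbian association come from the abstract.

```python
import numpy as np

N_DIRECTIONS = 8  # eight motion directions, as described in the abstract


def direction_histogram(vectors):
    """Bin 2-D motion vectors (optical-flow vectors from the camera image,
    or angular displacement vectors from the somatic sensors) into eight
    direction bins, weighted by magnitude and normalized."""
    hist = np.zeros(N_DIRECTIONS)
    for vx, vy in vectors:
        magnitude = np.hypot(vx, vy)
        if magnitude < 1e-6:
            continue  # ignore near-zero motion
        angle = np.arctan2(vy, vx) % (2 * np.pi)
        hist[int(angle // (2 * np.pi / N_DIRECTIONS)) % N_DIRECTIONS] += magnitude
    total = hist.sum()
    return hist / total if total > 0 else hist


class MotionEquivalenceLearner:
    """Hebbian association between observed motion directions (optical flow
    of the human head) and self-produced motion directions (the robot's
    angular displacement vectors)."""

    def __init__(self, learning_rate=0.1):  # learning rate is an assumed value
        self.weights = np.zeros((N_DIRECTIONS, N_DIRECTIONS))  # self x other
        self.learning_rate = learning_rate

    def update(self, self_motion, observed_motion):
        # Hebbian rule: strengthen connections between direction bins that
        # are co-active while the robot tracks the person's face.
        self.weights += self.learning_rate * np.outer(self_motion, observed_motion)

    def imitate(self, observed_motion):
        # Return the self-motion direction bin most strongly activated by
        # the observed motion through the learned mapping.
        return int(np.argmax(self.weights @ observed_motion))
```

In this sketch, face tracking makes the self-produced head motion and the observed optical flow point in the same direction, so the co-active bins strengthen the corresponding weight entries; after repeated updates, an observed head turn alone selects the matching self-motion direction, which is how the acquired sensorimotor mapping would support imitation.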


