Abstract
Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical study of action perception has been facilitated by the use of well-controlled artificial stimuli, such as robots. One broad question this approach can address is which aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, artificial stimuli such as robots allow us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents performing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right-arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the Human than of either agent with mechanical motion. There was also a relationship between the dynamics of the muscle activity and the motion dynamics of the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations.
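To make the EMG measure concrete, the sketch below shows the kind of preprocessing commonly used to quantify muscle activity over time and to relate EMG dynamics to the motion dynamics of a stimulus. The sampling rate, filter settings, and the stimulus_speed regressor are illustrative assumptions, not the paper's reported pipeline.

```python
# A minimal sketch of standard EMG linear-envelope processing
# (high-pass filter, full-wave rectification, low-pass smoothing).
# All parameter values here are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed EMG sampling rate in Hz

def emg_envelope(raw, fs=FS, highpass_hz=20.0, lowpass_hz=6.0):
    """Return a linear envelope of a raw EMG trace."""
    # High-pass to remove baseline drift and motion artifacts.
    b, a = butter(4, highpass_hz / (fs / 2), btype="high")
    filtered = filtfilt(b, a, raw)
    # Full-wave rectification.
    rectified = np.abs(filtered)
    # Low-pass to obtain a smooth amplitude envelope.
    b, a = butter(4, lowpass_hz / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

# Hypothetical usage: correlate the envelope with the speed profile
# of the observed movement to ask whether EMG dynamics track the
# motion dynamics of the stimulus.
# r = np.corrcoef(emg_envelope(trial), stimulus_speed)[0, 1]
```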
Highlights
Understanding the movements and actions of others is critical for survival in many species
Since the Robot is Non-Human in both motion and appearance, we compared the Android and the Human, testing the effect of Human Motion while holding Human Appearance constant. In this multivariate analysis of variance (MANOVA), we again found a significant Condition × Arm × Motion interaction [F(1,26) = 9.53, p = 0.005, η2p = 0.27], as well as a significant Motion × Time interaction [F(9,234) = 2.15, p = 0.03, η2p = 0.08], indicating that the EMG response is sensitive to Human Motion (these effect sizes can be recovered from the F statistics; see the sketch after these highlights)
Considering the empirical data on the mirror neuron system (MNS) and embodiment, rather than any particular interpretation of those data, it is difficult to deny that some degree of motor processing is a critical part of action understanding
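As a quick arithmetic check (not code from the paper's analysis), partial eta squared for an F-test can be recovered from the F value and its degrees of freedom, which reproduces the effect sizes reported in the highlight above:

```python
# Partial eta squared from an F statistic:
# eta_p^2 = (F * df1) / (F * df1 + df2)
def partial_eta_squared(F, df1, df2):
    return (F * df1) / (F * df1 + df2)

print(round(partial_eta_squared(9.53, 1, 26), 2))   # 0.27 (Condition x Arm x Motion)
print(round(partial_eta_squared(2.15, 9, 234), 2))  # 0.08 (Motion x Time)
```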
Summary
Understanding the movements and actions of others is critical for survival in many species. The neural network in the human brain that supports action processing includes multiple brain areas, including neural systems related to visual processing of body form and motion, and the fronto-parietal mirror neuron system (MNS), which supports action understanding via analysis-by-synthesis (Rizzolatti et al., 2001; Saygin, 2012). Although there have been studies of action processing and the MNS that manipulated visual stimulus properties such as body form and biological motion (e.g., Buccino et al., 2004; Saygin et al., 2004b; Casile et al., 2010; van Kemenade et al., 2012; Miller and Saygin, 2013), detailed manipulation of visual stimulus parameters to specify the response properties of the MNS has not been a common approach, possibly because mirror neurons are thought to encode high-level information such as action goals regardless of the specific sensory signals that transmit such information (Rizzolatti and Craighero, 2004). Going forward, a thorough understanding of the functional architecture of the relevant networks will be essential as a foundation for building less simplistic and more complete neuro-computational accounts of action understanding.