Abstract

What mechanisms distinguish interactive from non-interactive actions? To answer this question we tested participants while they took turns playing music with a virtual partner: in the interactive joint action condition, the participants played a melody together with their partner by grasping (C note) or pressing (G note) a cube-shaped instrument, alternating in playing one note each. In the non-interactive control condition, players’ behavior was not guided by a shared melody, so that the partner’s actions and notes were irrelevant to the participant. In both conditions, the participant’s and partner’s actions were physically congruent (e.g., grasp-grasp) or incongruent (e.g., grasp-point), and the partner’s association between actions and notes was coherent with the participant’s or reversed. Performance in the non-interactive condition was only affected by physical incongruence, whereas joint action was only affected when the partner’s action-note associations were reversed. This shows that task interactivity shapes the sensorimotor coding of others’ behaviors, and that joint action is based on active prediction of the partner’s action effects rather than on passive action imitation. We suggest that such predictions are based on Dyadic Motor Plans that represent both the agent’s and the partner’s contributions to the interaction goal, like playing a melody together.

Highlights

  • What mechanisms distinguish interactive from non-interactive actions? To answer this question we tested participants while they took turns playing music with a virtual partner: in the interactive joint action condition, the participants played a melody together with their partner by grasping (C note) or pressing (G note) a cube-shaped instrument, alternating in playing one note each

  • Each musical sequence was divided into two trials of different “type”, Trial-type[1] and Trial-type[2], because the instructions informed participants about what to do in two consecutive trials: in Trial-type[1], they observed their partner’s action before being cued which note they had to play, whereas in Trial-type[2] they already knew what to do before observing their partner’s action because they had already seen the cue in the preceding trial

  • Follow-up ANOVA of Trial-type[1] data was consistent with these results: the analysis showed a main effect of Association and a pattern incompatible with the presence of visuomotor interference, contradicting what would be predicted by the Dual-Route hypothesis and providing further evidence supporting the Dyadic Motor Plan hypothesis
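To make the factorial structure of the design concrete, here is a minimal Python sketch that builds a fully crossed, shuffled trial list from the three factors described above (physical Congruence, action-note Association, and Trial-type). All names, factor labels, and the repetition count are illustrative assumptions, not the authors’ actual procedure.

```python
import itertools
import random

# Hypothetical factor labels reconstructing the 2 x 2 x 2 design described:
# the partner's action is physically congruent or incongruent with the
# participant's, the partner's action-note association is coherent with the
# participant's or reversed, and each melody spans two trial types.
CONGRUENCE = ("congruent", "incongruent")
ASSOCIATION = ("coherent", "reversed")
TRIAL_TYPE = (1, 2)  # Trial-type[1]: cue after observation; Trial-type[2]: cue already known

def build_trial_list(reps_per_cell=4, seed=0):
    """Return a shuffled, fully crossed trial list (names are illustrative)."""
    cells = list(itertools.product(CONGRUENCE, ASSOCIATION, TRIAL_TYPE))
    trials = [
        {"congruence": c, "association": a, "trial_type": t}
        for c, a, t in cells
        for _ in range(reps_per_cell)
    ]
    # Fixed seed so the sketch is reproducible; a real experiment would
    # randomize per participant.
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trial_list()
# Full crossing: 2 x 2 x 2 = 8 cells, 4 repetitions each = 32 trials.
```

The full crossing is what licenses the follow-up ANOVA described above: each combination of Congruence and Association is observed within each Trial-type, so their main effects and interactions can be separated.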


Introduction

It has been suggested that observation of others’ actions automatically triggers action simulation (due to bottom-up visuomotor associations [10]), whereas top-down, rule-based associations are enlisted to perform a motor response that differs from the observed one (Dual-Route hypothesis; see [14], and similar accounts [11,12]). This view is supported by evidence from patients with frontal lesions. Having lost the ability to exert top-down control …
