What is the hierarchical representation of tasks involving objects with complex internal dynamics?

Anastasia Sylaidi1* and Aldo A. Faisal1, 2
1 Imperial College London, Dep. of Bioengineering, United Kingdom
2 Imperial College London, Dep. of Computing, United Kingdom

Motor circuits, limb kinematics and many real-world tasks are organised hierarchically [1-2]. This prompts the question of how the brain represents hierarchical task dependencies and how these are mapped onto the nervous system and the control of movement. It was previously suggested that the CNS represents object manipulation tasks with regard to an intrinsic frame of reference centred on the body's actuators and sensors and/or an extrinsic frame of reference related to task context, environmental settings or the properties of point-like objects [3-5]. Ultimately, these two reference frames have to be linked to support the completion of high-level tasks [1], such as pouring wine into a glass. Here, we examine how body- and object-based reference frames are organised within a hierarchical structure of task representation that underlies motor learning. Towards this end we focus on the manipulation of objects with naturalistic internal dynamics. We test three candidate hierarchical schemes of motor learning in object manipulation tasks: (a) In the first scheme, task dynamics are principally learned and represented in an intrinsic, joint-based reference frame, on which the extrinsic reference frame of the object depends. This predicts that humans generalise learned task dynamics across joint space, but only for a single object orientation. (b) Conversely, in the second scheme, task dynamics are principally learned in an object-centred reference frame. This predicts generalisation across object orientations, but not across joint configurations. (c) In the third scheme, both reference frames are learned independently.
To test which of the three candidate mechanisms underlies motor control, we conducted a behavioural study. Human subjects performed a rotational task to a given accuracy inside a 3D virtual reality setup, using a bottle with complex or static internal dynamics. In two experiments, subjects learned a single training task and were subsequently instructed to complete multiple testing tasks in which object orientation or joint configuration varied (experiments 1 and 2, respectively). Performance was quantified as pivot point displacement and used to examine how learning transfers from training to testing conditions. Our results revealed a significant increase in pivot point displacement from the training to the testing phase in both experiments and for both object types (Wilcoxon test, p < 0.0125). This increase demonstrates poor generalisation of learned task dynamics to novel task contexts, and provides evidence that task representations are learned independently in body-centred and object-centred coordinate frames and combined ad hoc during motor action. These results motivate further investigation of each frame's learning timescale, the frames' potentially weighted contributions to motor learning, and the dependence of these weights on the nature of the object dynamics.
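The core comparison above is a paired training-versus-testing contrast on pivot point displacement. As a self-contained illustration of that logic only (not the authors' actual analysis, which used a Wilcoxon test), the sketch below runs a sign-flip permutation test on paired differences; all subject values, sample size, and the significance threshold here are hypothetical.

```python
import random

def paired_permutation_test(train, test, n_perm=10000, seed=0):
    """One-sided paired test: is mean(test - train) > 0?

    Randomly flips the sign of each paired difference and counts how
    often the permuted mean reaches the observed mean. Illustrates the
    paired-contrast logic; not the Wilcoxon test used in the study.
    """
    rng = random.Random(seed)
    diffs = [t - s for s, t in zip(train, test)]
    observed = sum(diffs) / len(diffs)
    count = 0
    for _ in range(n_perm):
        perm_mean = sum(d if rng.random() < 0.5 else -d for d in diffs) / len(diffs)
        if perm_mean >= observed:
            count += 1
    # Add-one smoothing so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)

# Hypothetical pivot point displacements (arbitrary units) for 10 subjects.
train = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 1.0, 0.9, 1.2]
test  = [1.8, 1.5, 2.0, 1.7, 1.9, 1.4, 1.8, 1.6, 1.5, 2.1]
p = paired_permutation_test(train, test)
print(f"p = {p:.4f}")  # a small p indicates displacement increased at test
```

With every subject's displacement rising from training to testing, only near-all-positive sign assignments can match the observed mean, so the permutation p-value falls well below a Bonferroni-style threshold such as 0.0125.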