Abstract

Humans can learn and store multiple visuomotor mappings (dual-adaptation) when feedback for each is provided alternately. Moreover, learned context cues associated with each mapping can be used to switch between the stored mappings. However, little is known about the associative learning between cue and required visuomotor mapping, and how learning generalises to novel but similar conditions. To investigate these questions, participants performed a rapid target-pointing task while we manipulated the offset between visual feedback and movement end-points. The visual feedback was presented with horizontal offsets of different amounts, depending on the target's shape. Participants thus needed to use different visuomotor mappings between target location and required motor response, depending on the target shape, in order to “hit” it. The target shapes were taken from a continuous set of shapes, morphed between spiky and circular shapes. After training, we tested participants' performance, without feedback, on different target shapes that had not been learned previously. We compared two hypotheses. First, we hypothesised that participants could (explicitly) extract the linear relationship between target shape and visuomotor mapping and generalise accordingly. Second, based on previous findings in visuomotor learning, we developed an (implicit) Bayesian learning model that predicts generalisation more consistent with categorisation (i.e. using one mapping or the other). The experimental results show that, although learning the associations requires explicit awareness of the cues' role, participants apply the mapping corresponding to the trained shape that is most similar to the current one, consistent with the Bayesian learning model. Furthermore, the Bayesian learning model predicts that learning should slow down with increased numbers of training pairs, which was confirmed by the present results. In short, we found a good correspondence between the Bayesian learning model and the empirical results, indicating that this model constitutes a possible mechanism for simultaneously learning multiple visuomotor mappings.
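
To make the contrast between the two hypotheses concrete, the minimal Python sketch below compares them for a shape-morph parameter s between 0 (spiky) and 1 (circular). This is not the paper's implementation: the trained morph values, the ±2 cm offsets, and the Gaussian similarity kernel (a simplified stand-in for the full Bayesian learning model, equivalent to a posterior over the trained cues under Gaussian likelihoods with a uniform prior) are all illustrative assumptions.

import numpy as np

# Hypothetical trained cue-mapping pairs: two morph values and the
# horizontal feedback offset (cm) learned for each.
trained_shapes = np.array([0.2, 0.8])
trained_offsets = np.array([-2.0, 2.0])

def linear_hypothesis(s):
    # Explicit rule: fit a line through the trained (shape, offset)
    # pairs and interpolate/extrapolate it to the test shape.
    slope, intercept = np.polyfit(trained_shapes, trained_offsets, 1)
    return slope * s + intercept

def similarity_hypothesis(s, width=0.1):
    # Categorisation-like rule: weight each learned mapping by the
    # similarity of its trained shape to the test shape; the
    # normalised Gaussian weights act like a posterior over the
    # trained cues.
    w = np.exp(-0.5 * ((s - trained_shapes) / width) ** 2)
    w /= w.sum()
    return float(w @ trained_offsets)

for s in (0.0, 0.2, 0.5, 0.8, 1.0):
    print(f"s={s:.1f}: linear {linear_hypothesis(s):+.2f} cm, "
          f"similarity {similarity_hypothesis(s):+.2f} cm")

Between the trained shapes the two rules roughly agree, but beyond them the similarity rule snaps to the mapping of the nearest trained shape instead of extrapolating the line, which is the categorisation-like generalisation pattern the abstract attributes to the Bayesian model.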

Highlights

  • When interacting with the world around us, we largely depend on prior knowledge about the structure of the world and the relationships between the sensory signals resulting from it in order to choose the appropriate action to achieve our goal

  • How do we learn such context-dependent distortions and how does such learning generalise? We translated this question into a target-pointing task in which different target shapes were each associated with a different distortion

  • We found that participants did not use the trained linear relationship, but rather weighted the learned shape contexts according to their similarity to the current test shape


Introduction

When interacting with the world around us, we largely depend on prior knowledge about the structure of the world and the relationships between the sensory signals resulting from it in order to choose the appropriate action to achieve our goal. This prior knowledge is not necessarily fixed. For instance, when we start wearing a new pair of glasses, the relationship between the visual input and the outside world changes: we perceive objects to our left more to the left and objects to the right more to the right. Due to this changed relationship we may initially experience some problems when trying to look at or reach for any object, but we adapt our behaviour to the new relationship in a relatively short amount of time. That is, we can learn the association between visual location and required movement to successfully aim for a target.

