Abstract

When asked to move their unseen hand to visual targets, people exhibit idiosyncratic but reliable visuo-proprioceptive matching errors. Unsurprisingly, vision and proprioception quickly align when these errors are made apparent by providing visual feedback of hand position. However, retention of this learning is limited, such that the original matching errors soon reappear once visual feedback is removed. Several recent motor learning studies have shown that reward feedback can improve retention relative to error feedback. Here, using a visuo-proprioceptive position-matching task, we examined whether binary reward feedback can be effectively exploited to reduce matching errors and, if so, whether this learning leads to better retention than learning based on error feedback. The results show that participants were able to adjust the visuo-proprioceptive mapping with reward feedback, but that the level of retention was similar to that observed when the adjustment was accomplished with error feedback. Thus, like error feedback, reward feedback allows for temporary recalibration, but it does not support long-lasting retention of this recalibration.

Highlights

  • The ability to learn new motor skills and adapt movements to changes in the environment is essential to successful performance in daily tasks

  • We compared the retention of veridical visuo-proprioceptive alignment learned through error feedback and binary reward feedback

  • We showed that (1) binary reward feedback is effective in reducing biases in a position-matching task, but (2) this reinforcement learning does not result in greater retention than error-based learning



Introduction

The ability to learn new motor skills and adapt movements to changes in the environment is essential for successful performance in daily tasks. When a movement misses its target, the resulting error is a signed signal that specifies both the direction and the magnitude of the required correction, and such errors can drive error-based learning. In situations that require a more complex sequence of actions to achieve the goal, or in which such an error signal is not available, such as learning how to make a playground swing go higher, one has to learn from success and failure instead. These reinforcement signals are inherently unsigned and do not specify how behavior must change to accomplish the task. Error-based and reinforcement learning are thought to rely on different neural mechanisms.
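To make the signed/unsigned distinction concrete, the toy sketch below (Python; not taken from the paper) contrasts the two kinds of feedback when reducing a one-dimensional matching bias: with signed error feedback the direction of the correction is given, whereas with binary reward feedback it must be discovered through exploration. The target value, learning rate, exploration noise, and simplified reward rule are all illustrative assumptions.

```python
# Minimal toy sketch (illustrative only): signed error feedback vs. binary
# reward feedback for recalibrating a 1-D visuo-proprioceptive matching bias.
import random

TARGET = 0.0   # "true" visual target position (arbitrary units)
BIAS = 5.0     # idiosyncratic proprioceptive matching bias

# Error-based learning: the signed error specifies both the direction and the
# size of the required correction, so a proportional update converges quickly.
estimate = BIAS
for _ in range(20):
    error = estimate - TARGET        # signed, graded error feedback
    estimate -= 0.3 * error          # correct against the error

# Reinforcement learning: feedback is only "success"/"failure" and carries no
# direction, so the learner must explore and retain the changes that happen to
# be rewarded. Here "success" is simplified to "closer to the target than before".
estimate_rl = BIAS
for _ in range(200):
    proposal = estimate_rl + random.gauss(0.0, 0.5)   # motor exploration
    rewarded = abs(proposal - TARGET) < abs(estimate_rl - TARGET)
    if rewarded:
        estimate_rl = proposal       # keep only rewarded changes

print(f"error feedback: {estimate:.2f}  reward feedback: {estimate_rl:.2f}")
```

Both learners end up near the target, but the reward-based learner needs many more attempts because the binary signal never indicates which way to correct.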

