Abstract

In many daily tasks, we make multiple decisions before reaching a goal. To learn such sequences of decisions, a mechanism that links earlier actions to later reward is necessary. Reinforcement learning (RL) theory suggests two classes of algorithms for solving this credit assignment problem: in classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of action sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.
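To make the two classes concrete, the following is a minimal, purely illustrative sketch (not the paradigm or any specific model from the paper) of tabular TD learning on a short state chain, run for a single rewarded episode with and without an eligibility trace; the chain length and all parameter values are assumptions chosen for illustration.

    # Illustrative sketch (not the authors' paradigm): tabular TD learning on a
    # short state chain, one rewarded episode, with and without an eligibility trace.
    # Chain length and parameter values are assumptions.

    def run_episode(lam, n_states=5, alpha=0.5, gamma=0.9):
        """One episode that walks the chain left to right and ends with reward 1."""
        V = [0.0] * n_states              # state values, initialised to zero
        e = [0.0] * n_states              # eligibility traces
        for s in range(n_states):
            next_v = V[s + 1] if s + 1 < n_states else 0.0   # terminal value is 0
            r = 1.0 if s == n_states - 1 else 0.0            # single reward at the end
            delta = r + gamma * next_v - V[s]                # TD error
            e[s] += 1.0                                      # current state becomes eligible
            for i in range(n_states):
                V[i] += alpha * delta * e[i]                 # credit every eligible state
                e[i] *= gamma * lam                          # eligibility fades over time
        return V

    print("TD(0),      lam=0.0:", [round(v, 2) for v in run_episode(lam=0.0)])
    print("TD(lambda), lam=0.9:", [round(v, 2) for v in run_episode(lam=0.9)])
    # With lam = 0.0, only the state next to the reward is updated after one episode;
    # with lam = 0.9, the whole visited sequence is reinforced from that single reward.

Without the trace, reward information propagates back by only one state per repetition of the task; with the trace, every visited state is credited after the single rewarded episode, which is the qualitative difference the paradigm described above is designed to detect.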

Highlights

  • In games such as chess or backgammon, players have to perform a sequence of many actions before a reward is received

  • While one-shot learning is a well-known phenomenon in tasks such as image recognition [35, 36] and one-step decision making [37, 38, 39], it has so far not been linked to reinforcement learning (RL) with eligibility traces in multi-step decision making

  • Eligibility traces are a fundamental factor underlying the human capability of quick learning and adaptation


Introduction

In games such as chess or backgammon, players have to perform a sequence of many actions before a reward (win or loss) is received. Learning such a sequence from a single experience (one-shot) requires algorithms that keep a memory of past states and actions, making them eligible for later, i.e., delayed, reinforcement. Such a memory is a key feature of the second class of RL theories, RL with eligibility trace, which includes algorithms with explicit eligibility traces [8, 9, 10, 11, 12] and related reinforcement learning models [1, 9, 13, 14, 15].
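This memory of past states and actions can be illustrated, in a generic form that is an assumption rather than any of the cited models, as a decaying trace over visited state-action pairs; parameter values and state/action names below are likewise illustrative.

    # Illustrative sketch of an eligibility trace as a fading memory over visited
    # (state, action) pairs; all parameters and state/action names are assumptions.
    from collections import defaultdict

    gamma, lam, alpha = 0.9, 0.8, 0.5     # discount, trace decay, learning rate
    Q = defaultdict(float)                # action values
    trace = defaultdict(float)            # eligibility of each (state, action) pair

    def step(state, action, td_error):
        """Mark the taken pair as eligible, then credit every still-eligible pair."""
        trace[(state, action)] += 1.0                  # accumulate eligibility
        for key in list(trace):
            Q[key] += alpha * td_error * trace[key]    # delayed credit assignment
            trace[key] *= gamma * lam                  # the memory fades over time

    # Three unrewarded decisions build up eligibility ...
    for s, a in [("s0", "left"), ("s1", "right"), ("s2", "right")]:
        step(s, a, td_error=0.0)
    # ... so a single delayed reward reinforces the entire sequence at once.
    step("s3", "left", td_error=1.0)
    print({k: round(v, 2) for k, v in Q.items()})

Because the trace decays by a factor gamma * lam at every step, earlier actions receive proportionally less credit than recent ones, but all of them are updated from the single delayed reward.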
