Abstract

Value-based decision-making is ubiquitous in everyday life, and critically depends on the contingency between choices and their outcomes. Only if outcomes are contingent on our choices can we make meaningful value-based decisions. Here, we investigate the effect of outcome contingency on the neural coding of rewards and tasks. Participants performed a reversal-learning paradigm in which reward outcomes were contingent on trial-by-trial choices, and a ‘free choice’ paradigm in which rewards were random and not contingent on choices. We hypothesized that contingent outcomes enhance the neural coding of rewards and tasks, a hypothesis we tested using multivariate pattern analysis of fMRI data. Reward outcomes were encoded in a large network including the striatum, dmPFC and parietal cortex, and these representations were indeed amplified for contingent rewards. Tasks were encoded in the dmPFC at the time of decision-making, and in parietal cortex in a subsequent maintenance phase. We found no evidence for contingency-dependent modulations of task signals, demonstrating highly similar coding across contingency conditions. Our findings suggest selective effects of contingency on reward coding only, and further highlight the role of dmPFC and parietal cortex in value-based decision-making, as these were the only regions strongly involved in both reward and task coding.
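
As a rough illustration of the decoding approach mentioned above, the sketch below shows a cross-validated classification of reward outcome from ROI voxel patterns using scikit-learn. The data and variable names (`roi_patterns`, `reward_labels`, `run_ids`) are hypothetical placeholders; this is not the authors' actual analysis pipeline.

```python
# Minimal sketch of cross-validated MVPA decoding of reward outcome from ROI
# voxel patterns. All data and names are hypothetical placeholders; this is
# not the authors' analysis pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 120, 300, 6
roi_patterns = rng.standard_normal((n_trials, n_voxels))    # trial-wise ROI patterns
reward_labels = rng.integers(0, 2, n_trials)                # 1 = rewarded, 0 = not rewarded
run_ids = np.repeat(np.arange(n_runs), n_trials // n_runs)  # fMRI run of each trial

# Linear classifier with feature scaling, cross-validated across runs so that
# training and test trials never come from the same run.
decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, roi_patterns, reward_labels,
                         groups=run_ids, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

In such a scheme, amplified reward coding for contingent outcomes would show up as higher decoding accuracy in the contingent than in the non-contingent condition.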

Highlights

  • Value-based decision-making is ubiquitous in everyday life, and critically depends on the contingency between choices and their outcomes

  • Each trial started with a ‘choose’ cue being presented on screen, indicating that subjects should choose which of the two mappings/tasks they wanted to perform in the current trial

  • Reward outcomes were contingent on the specific choice made in the trial, and contingencies changed across the course of the experiment


Introduction

Value-based decision-making is ubiquitous in everyday life, and critically depends on the contingency between choices and their outcomes. After implementing the chosen behavior[2], predicted and experienced outcomes are compared, and reward prediction errors are computed[3,4,5]. This dopamine-mediated learning signal[6] indicates the need to update our internal models of action-outcome contingencies, which leads to an adaptation of future behavior. Some initial evidence suggests that rewarding correct performance enhances neural task representations[19], but this work did not address the issue of varying degrees of control over choice outcomes. We tested whether task representations in these brain regions were enhanced when rewards were choice-contingent versus when they were not.
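
To make the prediction-error computation concrete, the sketch below implements a generic Rescorla-Wagner-style update, in which the prediction error is the difference between the received reward and the current value estimate. The learning rate and update rule are standard textbook choices used purely for illustration, not the model (if any) fitted in this study.

```python
# Generic Rescorla-Wagner-style value update (illustrative only; not the
# computational model used in this study).
def update_value(value, reward, alpha=0.2):
    """Return the reward prediction error and the updated value estimate."""
    prediction_error = reward - value             # delta_t = r_t - V_t
    new_value = value + alpha * prediction_error  # V_{t+1} = V_t + alpha * delta_t
    return prediction_error, new_value

# Example: the value estimate tracks a mid-sequence reversal of reward contingencies.
value = 0.5
for reward in [1, 1, 1, 0, 0, 0]:
    delta, value = update_value(value, reward)
    print(f"reward={reward}  prediction error={delta:+.2f}  value={value:.2f}")
```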
