Abstract

Most real-life cues carry inherent values that may interfere with or facilitate the acquisition of new expected values during associative learning. In particular, when inherent and acquired values are congruent, learning may progress more rapidly. Here we investigated this influence with a 2 × 2 factorial design, using the attractiveness (high/low) of a facial picture as a proxy for the cue's inherent value and its reward probability (high/low) as a surrogate for the acquired value. Each picture was paired with a monetary win or loss either congruently or incongruently. Behavioral results from 32 human participants indicated both faster response times and higher learning rates for value-congruent cue-outcome pairings. Model-based fMRI analysis revealed a fractionation of reinforcement learning (RL) signals in the ventral striatum, including a strong and novel correlation between the cue-specific decaying learning rate and BOLD activity in the ventral caudate. Additionally, we detected a functional link between neural signals of both learning rate and reward prediction error in the ventral striatum and the signal of expected value in the ventromedial prefrontal cortex, providing a novel confirmation of the mathematical RL model via functional connectivity.

Most real-world decisions require the integration of inherent value and sensitivity to outcomes to support adaptive learning. Inherent value is drawing increasing interest from decision scientists because it influences decisions in contexts ranging from advertising to investing. This study provides novel insight into how inherent value influences the acquisition of new expected value during associative learning. Specifically, we find that the congruence between the inherent value and the acquired reward influences the neural coding of learning rate.
We also show for the first time that neuroimaging signals coding the learning rate, prediction error, and acquired value follow the multiplicative Rescorla-Wagner learning rule, a finding predicted by reinforcement learning theory.
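The Rescorla-Wagner rule referenced here can be sketched in a few lines. This is a minimal illustration only: the specific decay schedule and the parameter names (`alpha0`, `decay`) are assumptions for the sketch, not the paper's fitted model.

```python
def rescorla_wagner(rewards, alpha0=0.5, decay=0.1):
    """Track an expected value V with a decaying learning rate.

    Illustrative sketch: alpha0 and decay are hypothetical parameters,
    and the hyperbolic decay schedule is an assumption for this example.
    """
    v = 0.0
    history = []
    for t, r in enumerate(rewards):
        alpha = alpha0 / (1.0 + decay * t)  # learning rate decays across trials
        delta = r - v                       # reward prediction error
        v = v + alpha * delta               # multiplicative Rescorla-Wagner update
        history.append(v)
    return history
```

With a constant reward of 1, the expected value rises toward 1 while each successive update shrinks, reflecting the decaying learning rate.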
