Abstract

In reinforcement learning (RL) tasks, decision makers learn the values of actions in a context-dependent fashion. Although context dependence has many advantages, it can lead to suboptimal preferences when choice options are extrapolated beyond their original encoding contexts. Here, we tested whether we could manipulate context dependence in RL by introducing a secondary task designed to bias attention toward either absolute or relative outcomes. Participants completed a learning phase that involved choices between two (Experiment 1; n = 111) or three (Experiment 2; n = 90) options per trial with complete feedback. Choice options were grouped in stable contexts so that only a small subset of the possible combinations was encountered. One group of participants rated how they felt about particular options (Feelings condition), and another group reported how much they expected to win from particular options (Outcomes condition) at occasional points throughout the learning phase. A third group (Control condition) made no ratings. In the subsequent transfer test, participants chose between all possible pairs of options without feedback. The experimental manipulation had no effect on learning-phase performance but a significant effect on transfer, with the Feelings and Control conditions exhibiting greater context dependence than the Outcomes condition. Further, rated feelings reflected relative valuation, whereas expected outcomes were more sensitive to absolute option values. Hierarchical Bayesian modeling was used to summarize the findings from both experiments. Our results suggest that attending to affective reactions versus expected outcomes moderates the effects of encoding context on subsequent choices.
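The contrast between absolute and context-dependent valuation can be illustrated with a toy delta-rule update. Below is a minimal sketch, assuming complete feedback and range normalization within the encoding context; the function, the normalization scheme, and the learning-rate value are illustrative assumptions, not the authors' computational model.

```python
def q_update(q, choice, reward, context_rewards, alpha=0.3, relative=True):
    """One delta-rule learning step for the chosen option.

    q               -- list of current option values
    choice          -- index of the chosen option
    reward          -- outcome of the chosen option
    context_rewards -- all outcomes shown in this context (complete feedback)
    relative        -- if True, encode the outcome relative to the context's
                       outcome range, which induces context-dependent values;
                       if False, encode the raw (absolute) outcome
    """
    if relative:
        lo, hi = min(context_rewards), max(context_rewards)
        target = (reward - lo) / (hi - lo) if hi > lo else 0.5
    else:
        target = reward
    q[choice] += alpha * (target - q[choice])
    return q

# Example: in a context pairing outcomes of 7.5 and 2.5 points, relative
# encoding drives the better option toward 1.0 regardless of its absolute
# payoff, so its learned value can mislead choices in new pairings.
q = q_update([0.0, 0.0], choice=0, reward=7.5, context_rewards=[7.5, 2.5])
```

Under relative encoding, a mediocre option that is the best in a poor context acquires a high learned value, which is one way the suboptimal transfer preferences described above can arise.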
