Abstract

Value-based decision making in complex environments, such as those in which the mapping of reward probabilities onto options is uncertain and volatile, may engender computational strategies that are not optimal in terms of normative frameworks but that ensure effective learning and behavioral flexibility under limited neural computational resources. In this article, we review one such suboptimal strategy: additively combining the reward magnitude and reward probability attributes of options for value-based decision making. In addition, we present the computational intricacies of a recently developed model (the MIX model) that provides an algorithmic implementation of the additive strategy in sequential decision making with two options. We also discuss its opportunities as well as its conceptual, inferential, and generalization issues. Furthermore, we suggest future studies that would reveal the potential of the MIX model and serve its further development as a general model of value-based choice.

Highlights

  • A fundamental assumption in classical economics is that, following expected utility theory (von Neumann and Morgenstern, 1947), reward magnitudes and reward probabilities are integrated in an optimal way, that is, multiplicatively, to derive option values and make choices

  • We review a suboptimal strategy: additively combining the reward magnitude and reward probability attributes of options for value-based decision making

  • High values of the parameter ω favor strong reliance on state beliefs relative to utility information, yielding relatively safe choices, whereas low values indicate risk-seeking choices. Both state beliefs and utilities of choice options are derived through a multi-step computational algorithm that, as shown by Rouault et al. (2019), provides a mechanistic neurocomputational account of human choices in an uncertain and volatile value-based decision-making environment


Introduction

A fundamental assumption in classical economics is that, following expected utility theory (von Neumann and Morgenstern, 1947), reward magnitudes and reward probabilities (the computational components of option value) are integrated in an optimal way, that is, multiplicatively, to derive option values and make choices. In the recently developed MIX model, by contrast, both state beliefs and utilities of choice options are derived through a multi-step computational algorithm that, as shown by Rouault et al. (2019), provides a mechanistic neurocomputational account of human choices in an uncertain and volatile value-based decision-making environment.
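The contrast between the normative multiplicative rule and the additive strategy, and the ω-weighted mixing of state beliefs and utilities, can be sketched as follows. This is a minimal illustration, not the full multi-step algorithm of the MIX model: the rescaling of magnitudes to [0, 1], the equal weighting in the additive rule, the softmax choice rule, and all parameter values are assumptions made for this sketch.

```python
import math

def multiplicative_value(magnitude, probability):
    """Normative expected-value integration: V = p * m."""
    return probability * magnitude

def additive_value(magnitude, probability, w=0.5):
    """Suboptimal additive integration: V = w*p + (1 - w)*m.
    Assumes magnitude has been rescaled to [0, 1]; w is an
    illustrative weight, not a fitted parameter."""
    return w * probability + (1 - w) * magnitude

def mix_value(state_belief, utility, omega=0.7):
    """Schematic MIX-style weighting: high omega leans on state
    beliefs (safer choices), low omega on utilities (risk-seeking)."""
    return omega * state_belief + (1 - omega) * utility

def p_choose_left(v_left, v_right, beta=5.0):
    """Softmax (logistic) choice rule over two option values."""
    return 1.0 / (1.0 + math.exp(-beta * (v_left - v_right)))

# Two options: (reward magnitude rescaled to [0, 1], reward probability)
left, right = (0.8, 0.25), (0.4, 0.75)

# Multiplicative integration favors the right option (0.2 vs 0.3) ...
p_mult = p_choose_left(multiplicative_value(*left), multiplicative_value(*right))
# ... while additive integration narrows the gap (0.525 vs 0.575),
# because the large magnitude of the left option is not discounted
# by its low reward probability as strongly as under the normative rule
p_add = p_choose_left(additive_value(*left), additive_value(*right))
print(f"P(left | multiplicative) = {p_mult:.3f}")
print(f"P(left | additive)       = {p_add:.3f}")
```

The example makes the behavioral signature of the additive strategy concrete: relative to the multiplicative benchmark, it shifts choice toward high-magnitude, low-probability options, since probability enters the value as an added term rather than a multiplicative discount.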
