Abstract

Although praised for their rationality, humans often make poor decisions, even in simple situations. In the repeated binary choice experiment, an individual has to choose repeatedly between the same two alternatives, where a reward is assigned to one of them with fixed probability. The optimal strategy is to perseverate with choosing the alternative with the best expected return. Whereas many species perseverate, humans tend to match the frequencies of their choices to the frequencies of the alternatives, a sub-optimal strategy known as probability matching. Our goal was to find the primary cognitive constraints under which a set of simple evolutionary rules can lead to such contrasting behaviors. We simulated the evolution of artificial populations, wherein the fitness of each animat (artificial animal) depended on its ability to predict the next element of a sequence made up of a repeating binary string of varying size. When the string was short relative to the animats’ neural capacity, they could learn it and correctly predict the next element of the sequence. When it was long, they could not learn it, turning to the next best option: to perseverate. Animats from the last generation then performed the task of predicting the next element of a non-periodical binary sequence. We found that, whereas animats with smaller neural capacity kept perseverating with the best alternative as before, animats with larger neural capacity, which had previously been able to learn the pattern of repeating strings, adopted probability matching, being outperformed by the perseverating animats. Our results demonstrate how the ability to make predictions in an environment endowed with regular patterns may lead to probability matching under less structured conditions. They point to probability matching as a likely by-product of adaptive cognitive strategies that were crucial in human evolution, but may lead to sub-optimal performances in other environments.
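
To make the payoff difference concrete, here is a minimal Python sketch (not part of the study; the reward probability p = 0.7 and all names are illustrative) comparing the two strategies in the repeated binary choice task: perseverating on the better alternative succeeds on a fraction p of trials, whereas probability matching succeeds on only p² + (1 − p)² of them.

```python
import random

def run_trials(choose, p=0.7, n_trials=100_000, seed=0):
    """Fraction of trials on which the agent picks the rewarded alternative."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        rewarded = "A" if rng.random() < p else "B"   # option A pays off with probability p
        if choose(rng, p) == rewarded:
            correct += 1
    return correct / n_trials

def maximize(rng, p):
    """Perseveration: always choose the alternative with the higher expected return."""
    return "A" if p >= 0.5 else "B"

def match(rng, p):
    """Probability matching: choose each alternative with its reward frequency."""
    return "A" if rng.random() < p else "B"

if __name__ == "__main__":
    # Expected accuracy: maximizing ~ p = 0.70; matching ~ p**2 + (1 - p)**2 = 0.58.
    print("perseverate:", run_trials(maximize))
    print("match:      ", run_trials(match))
```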

Highlights

  • In politics and the social sciences, it is often assumed that humans make rational decisions, especially in simple situations that repeat themselves [1]

  • The so-called rational choice theory models human beings as agents who go about achieving their self-interested goals in the best possible way, maximizing their expected utility [2]

  • We propose an artificial life model that helps us understand how being selected for learning structured patterns may lead to probability matching, and how failure to learn them leads to perseveration


Summary

Introduction

In politics and the social sciences, it is often assumed that humans make rational decisions, especially in simple situations that repeat themselves [1]. The so-called rational choice theory models human beings as agents who pursue their self-interested goals in the best possible way, maximizing their expected utility [2]. This theory, often in conjunction with game theory, is used to predict the behavior of individuals. Yet people systematically deviate from these predictions. When asked to predict the next element in a sequence of coin tosses, for instance, many people believe that the chance of getting a tail increases after several heads in a row [3]. This belief, known as the gambler’s fallacy, is incorrect, since successive tosses are independent, and it may lead to sub-optimal performance.
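
As a quick illustration of why the gambler’s fallacy is mistaken (a sketch added here, not taken from the cited studies): because fair coin tosses are independent, the probability of tails stays at 1/2 no matter how many heads came before, which a short simulation confirms.

```python
import random

def tails_after_heads_run(run_length=3, n_tosses=500_000, seed=0):
    """Estimate P(next toss is tails | the previous `run_length` tosses were all heads)."""
    rng = random.Random(seed)
    tosses = [rng.random() < 0.5 for _ in range(n_tosses)]   # True = heads, False = tails
    runs = hits = 0
    for i in range(run_length, n_tosses):
        if all(tosses[i - run_length:i]):   # the previous run_length tosses were heads
            runs += 1
            hits += not tosses[i]           # the next toss came up tails
    return hits / runs

if __name__ == "__main__":
    # Because tosses are independent, the estimate stays near 0.5 for any run length.
    for k in (1, 3, 5):
        print(f"P(tails | {k} heads in a row) ≈ {tails_after_heads_run(k):.3f}")
```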

