Abstract

This paper presents a case of parsimony and generalization in model comparisons. We submitted two versions of the same cognitive model to the Market Entry Competition (MEC), which involved four-person, two-alternative (enter or stay out) games. Our model was designed according to the Instance-Based Learning Theory (IBLT). The two versions of the model assumed the same cognitive principles of decision making and learning in the MEC. The only difference between the two models was the assumption of homogeneity among the four participants: one model assumed homogeneous participants (IBL-same) while the other assumed heterogeneous participants (IBL-different). The IBL-same model involved three free parameters in total, while the IBL-different model involved 12 free parameters, i.e., three free parameters for each of the four participants. The IBL-different model outperformed the IBL-same model in the competition, but after exposing the models to a more challenging generalization test (the Technion Prediction Tournament), the IBL-same model outperformed the IBL-different model. Thus, a loser can be a winner depending on the generalization conditions used to compare models. We describe the models and the process by which we reached these conclusions.

Highlights

  • A choice prediction competition was organized by Erev, Ert, and Roth [1]

  • The cognitive model submitted to the Market Entry Competition (MEC) is based on a cognitive theory of decisions from experience, Instance-Based Learning Theory (IBLT), originally developed to explain and predict learning and decision making in dynamic decision-making environments [2]

  • Models were evaluated on 120 problems using two statistics: the average proportion of risky choices (R-rate) across the 120 problems and the average proportion of alternations (A-rate) across the 120 problems
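The two evaluation statistics mentioned above can be illustrated with a short sketch. This is not the authors' code; it simply assumes choices in one problem are coded 1 = risky and 0 = safe, with one entry per trial, and that per-problem rates are then averaged across problems.

```python
import numpy as np

def r_rate(choices):
    """Proportion of risky choices (coded 1) within one problem."""
    return float(np.mean(choices))

def a_rate(choices):
    """Proportion of trials on which the choice alternates
    from the previous trial within one problem."""
    c = np.asarray(choices)
    return float(np.mean(c[1:] != c[:-1]))

# Illustrative choice sequence for a single problem
seq = [1, 1, 0, 1, 0]
print(r_rate(seq))  # 0.6  (3 risky choices out of 5)
print(a_rate(seq))  # 0.75 (3 alternations out of 4 transitions)
```

Averaging these per-problem values over all 120 problems gives the aggregate R-rate and A-rate used to score the models.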


Summary

Introduction

A choice prediction competition was organized by Erev, Ert, and Roth [1]. This modeling competition focused on decisions from experience in market entry games (hereafter, the Market Entry Competition, MEC; http://sites.google.com/site/gpredcomp/). Human data from an estimation set were made available to researchers, who used them to calibrate their models. These models were then submitted to compete for the best predictive value on a new dataset called the competition set. One version of the IBL model assumed that the four players in the game had identical characteristics. As explained later, this model, called IBL-same, included the same set of parameter values for each of the four players in the game. The other version, called IBL-different, assumed heterogeneity of the four players and included a different set of parameter values for each of the four players. We discuss the calibration (or fit) of each model to the estimation set, present our a priori expectations of the performance of the two models on the competition set of the MEC, and discuss the main lessons learned from our participation in the MEC.
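The structural difference between the two model versions comes down to how parameters are shared across players. A minimal sketch, assuming three hypothetical free parameters per player (the names `decay`, `noise`, and `inertia` are illustrative placeholders, not the paper's exact specification):

```python
from dataclasses import dataclass

@dataclass
class IBLParams:
    # Three illustrative free parameters for one player
    decay: float
    noise: float
    inertia: float

N_PLAYERS = 4

# IBL-same: one shared parameter set for all players -> 3 free parameters
shared = IBLParams(decay=0.5, noise=0.25, inertia=0.3)  # illustrative values
ibl_same = [shared] * N_PLAYERS

# IBL-different: a separate parameter set per player -> 4 x 3 = 12 free parameters
ibl_different = [
    IBLParams(decay=0.5 + 0.1 * i, noise=0.25, inertia=0.3)
    for i in range(N_PLAYERS)
]

def n_free_params(param_sets):
    # Each distinct parameter object contributes 3 free parameters
    return 3 * len({id(p) for p in param_sets})

print(n_free_params(ibl_same))       # 3
print(n_free_params(ibl_different))  # 12
```

The extra nine parameters give IBL-different more flexibility to fit the estimation set, which is exactly what makes it more vulnerable to overfitting under a stricter generalization test.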

Market Entry Competition and Behavioral Methods
Competition Criteria and Dataset
An Instance-Based Learning Model
The IBL Model for the MEC
Inertia Mechanism
The General IBLT Mechanisms
Special Treatment of the First Trial
Two Versions of the IBL Model
Optimization of Parameters through a Genetic Algorithm
Results of MEC
Why Did the IBL-same Model Perform Worse than the IBL-different Model?
A Challenging Generalization: the Technion Prediction Tournament
Adapting the IBL Model of the MEC to the TPT
Results from the Generalization from MEC to TPT
Discussion