Abstract
We propose a framework for econometrically estimating case-based learning and apply it to empirical data from twelve 2 × 2 mixed-strategy equilibrium experiments. Case-based learning allows agents to explicitly incorporate information available to the experimental subjects in a simple, compact, and arguably natural way. We compare the estimates of case-based learning to those of other learning models (reinforcement learning and self-tuning experience weighted attraction learning) using both in-sample and out-of-sample measures. We find evidence that case-based learning explains these data better than the other models on both in-sample and out-of-sample measures. Additionally, the case-based specification estimates how various factors determine the salience of past experiences for the agents. We find that, in constant-sum games, opposing players' behavior is more important than recency, whereas in non-constant-sum games the reverse is true.
Highlights
Economists across the discipline—micro and macro, theory and empirics—study the impact of learning on individual and social behavior
Using in-sample measures to compare the learning models, we find that case-based learning (CBL) fits best, reinforcement learning (RL) second best, and self-tuning experience weighted attraction (EWA) third best
We demonstrate the estimation of a new learning model based on an existing decision theory, Case-based Decision Theory
Summary
Economists across the discipline—micro and macro, theory and empirics—study the impact of learning on individual and social behavior. We formulate a method to econometrically estimate Case-based Decision Theory (CBDT), introduced by Gilboa and Schmeidler [1], on individual choice data. Like Expected Utility (EU), CBDT is a decision theory: it shows that if an agent's choice behavior follows certain axioms, it can be rationalized with a particular mathematical representation of utility (e.g., Von Neumann and Morgenstern [2], Savage [3]). The Expected Utility framework has states of the world, actions, and payoffs/outcomes. The CBDT framework retains actions and payoffs, but it replaces the set of states with a set of "problems", or circumstances: essentially, vectors of information that describe the choice setting the agent faces. CBDT postulates that when an agent is confronted with a new problem, she asks herself how similar today's problem is to problems in memory. She uses those similarity-weighted problems to construct a forecasted payoff for each action and chooses an action with the highest forecasted payoff.
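To make the forecasting rule described above concrete, the sketch below computes similarity-weighted forecasted payoffs in Python. It follows the classic Gilboa–Schmeidler form, in which an action's forecast is the sum of similarity-weighted payoffs over remembered cases of that action. The exponential similarity kernel, the weight vector, and the toy memory are illustrative assumptions for this sketch, not the paper's estimated specification.

```python
import numpy as np

def similarity(problem, past_problem, weights):
    """Illustrative similarity kernel: exponential decay in a weighted
    distance between the current and a remembered problem vector.
    The functional form and `weights` are assumptions, not the paper's."""
    diff = np.asarray(problem, dtype=float) - np.asarray(past_problem, dtype=float)
    return np.exp(-np.dot(weights, np.abs(diff)))

def cbdt_forecasts(problem, memory, actions, weights):
    """Similarity-weighted forecasted payoff for each action.
    `memory` is a list of (past_problem, action, payoff) cases."""
    forecasts = {}
    for a in actions:
        cases = [(q, r) for (q, act, r) in memory if act == a]
        forecasts[a] = sum(similarity(problem, q, weights) * r for q, r in cases)
    return forecasts

def cbdt_choice(problem, memory, actions, weights):
    """Choose an action with the highest forecasted payoff."""
    forecasts = cbdt_forecasts(problem, memory, actions, weights)
    return max(forecasts, key=forecasts.get)

# Hypothetical usage: each problem is a 2-dimensional information vector,
# e.g. (round number, opponent's last action coded 0/1).
memory = [((1, 0), "up", 3.0), ((2, 1), "down", 1.0), ((3, 0), "up", 2.0)]
current_problem = (4, 0)
print(cbdt_choice(current_problem, memory, ["up", "down"],
                  weights=np.array([0.1, 1.0])))
```

In this sketch the weight vector plays the role of the salience parameters discussed in the abstract: a larger weight on a component of the problem vector (such as recency or the opposing player's behavior) makes past cases that differ on that component count for less in the forecast.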