Abstract

This chapter examines whether refinements based on forward induction or simple adaptive learning models better capture behavior in signaling game experiments. Observed behavior is inconsistent with both the equilibrium refinements literature and pure belief-based adaptive learning models; an augmented adaptive learning model, in which some players recognize the existence of dominated strategies and their consequences, successfully captures the major qualitative features of the data. Equilibrium refinements do quite poorly in capturing the results: they say nothing about the observed dynamics of play and cannot predict the observed differences between Treatments I and IB or among Treatments II, III, and IV. The results of Treatment V are completely inconsistent with any refinement of sequential equilibrium. All of these results can be characterized by a simple belief-based learning model augmented to allow for some limited reasoning ability on the part of players.
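
The augmented learning model is described only informally here. As a rough, hypothetical illustration of the idea, the sketch below implements a fictitious-play-style belief-based learner in which a "sophisticated" player assigns zero belief weight to an opponent's strictly dominated actions. The payoff numbers, the counting rule, and the function names are assumptions made for illustration, not the model estimated in the chapter.

```python
import numpy as np

def dominated(opponent_payoffs):
    """Indices of the opponent's strictly dominated actions.
    opponent_payoffs[s, a]: opponent's payoff from its action s against our action a."""
    n = opponent_payoffs.shape[0]
    return {s for s in range(n)
            if any(np.all(opponent_payoffs[t] > opponent_payoffs[s])
                   for t in range(n) if t != s)}

def best_reply(own_payoffs, counts, opponent_payoffs=None):
    """Belief-based best reply.
    own_payoffs[a, s]: our payoff from action a when the opponent plays s.
    counts: observed frequencies of the opponent's past actions (the beliefs).
    Passing opponent_payoffs makes the player 'sophisticated': strictly
    dominated opponent actions receive zero belief weight."""
    beliefs = counts.astype(float).copy()
    if opponent_payoffs is not None:
        for s in dominated(opponent_payoffs):
            beliefs[s] = 0.0
    beliefs /= beliefs.sum()
    return int(np.argmax(own_payoffs @ beliefs))

# Illustrative 2x2 example: opponent action 1 is strictly dominated.
own = np.array([[4.0, 0.0],    # our action 0
                [2.0, 3.0]])   # our action 1
opp = np.array([[3.0, 3.0],    # opponent action 0
                [1.0, 2.0]])   # opponent action 1 (strictly dominated)
counts = np.array([1.0, 5.0])  # history happens to favor the dominated action
print(best_reply(own, counts))       # naive learner best-responds to history -> 1
print(best_reply(own, counts, opp))  # sophisticated learner ignores it -> 0
```

In the chapter's actual model the share of sophisticated players, the initial beliefs, and the weighting of past experience would be free parameters; the sketch only shows how recognizing dominated strategies changes a belief-based best reply.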
