Abstract

We simulate a horse race between several behavioral models of play in one-shot games. First, we find that many models can lead to identical predictions, making it impossible to select a unique winning model. This problem is largely avoided by comparing only two models at a time. But even then we find that cross-validation sometimes fails to select the true model, often because models are estimated to be noiseless and then fail to predict out-of-sample data. The Bayesian Information Criterion avoids this problem, though the inflexibility of its parameter penalty appears to cause poor performance in certain settings.

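The failure mode described above can be made concrete with a small numerical sketch. The toy example below (hypothetical data and a one-parameter Bernoulli choice model, not taken from the paper) shows how a model whose noise parameter is estimated to be zero assigns probability zero to any out-of-sample deviation, so its cross-validated log-likelihood collapses to negative infinity, while BIC, computed from in-sample fit plus a parameter penalty, remains finite.

    import numpy as np
    from scipy.stats import bernoulli

    # Hypothetical data, purely illustrative: every training subject
    # happens to choose action 1, so the MLE is a noise-free model.
    train = np.array([1, 1, 1, 1, 1])
    test = np.array([1, 1, 0, 1, 1])  # one held-out subject deviates

    p_hat = train.mean()  # MLE -> 1.0: the model is estimated to be noiseless

    # Cross-validation: score the fitted model on held-out data.
    # The single deviation has probability zero, so the score is -inf.
    cv_loglik = bernoulli.logpmf(test, p_hat).sum()

    # BIC: in-sample fit plus a penalty of k*log(n) per parameter.
    # In-sample the noiseless model fits perfectly, so BIC stays finite.
    k, n = 1, len(train)
    bic = k * np.log(n) - 2 * bernoulli.logpmf(train, p_hat).sum()

    print(f"cross-validated log-likelihood: {cv_loglik}")  # -inf
    print(f"BIC: {bic:.3f}")                               # log(5) ~= 1.609

One out-of-sample deviation is enough to sink the noiseless model under cross-validation, whereas BIC penalizes its single parameter but never diverges, which is consistent with the behavior the abstract reports.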