Abstract

We study the first-order asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of PIs employed in the estimation. This class nests several estimators proposed in the literature. By considering a “pseudo likelihood” criterion function, our estimator becomes the K-pseudo maximum likelihood (PML) estimator in Aguirregabiria and Mira (2002, 2007). By considering a “minimum distance” criterion function, it defines a new K-minimum distance (MD) estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First, we establish that the K-PML estimator is consistent and asymptotically normal for any K∈N. This complements findings in Aguirregabiria and Mira (2007), who focus on K=1 and on K large enough to induce convergence of the estimator. Furthermore, we show that, under certain conditions, the asymptotic variance of the K-PML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K∈N. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-PML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-PML estimators. Our main result implies two new corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler, 2008). First, the optimal 1-MD estimator is efficient in the class of K-MD estimators for all K∈N. In other words, additional PIs do not provide first-order efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is at least as efficient as any K-PML estimator for all K∈N. Finally, the Appendix provides appropriate conditions under which the optimal 1-MD estimator is efficient among regular estimators.
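
As a rough sketch of the objects referenced above (the notation is assumed here rather than defined in the abstract: $\Psi$ denotes the policy iteration mapping applied to the conditional choice probabilities, $\hat{P}_0$ a preliminary estimate of those probabilities, and $\hat{W}$ a weight matrix), the K-PML estimator of Aguirregabiria and Mira (2002, 2007) alternates pseudo maximum likelihood steps with policy iteration updates,

\[
\hat{\theta}_k \;=\; \arg\max_{\theta \in \Theta} \; \frac{1}{n}\sum_{i=1}^{n} \ln \Psi\!\left(a_i \mid x_i;\, \theta,\, \hat{P}_{k-1}\right),
\qquad
\hat{P}_k \;=\; \Psi\!\left(\hat{\theta}_k,\, \hat{P}_{k-1}\right),
\qquad k = 1,\dots,K,
\]

while the 1-MD estimator of Pesendorfer and Schmidt-Dengler (2008) minimizes a quadratic form in the equilibrium fixed-point restriction,

\[
\hat{\theta}_{\mathrm{MD}} \;=\; \arg\min_{\theta \in \Theta} \; \big(\hat{P}_0 - \Psi(\theta, \hat{P}_0)\big)' \, \hat{W} \, \big(\hat{P}_0 - \Psi(\theta, \hat{P}_0)\big).
\]

The K-MD estimator described in the abstract iterates the minimum distance step, with the previous stage’s choice probabilities entering the criterion; the precise construction and the optimal choice of $\hat{W}$ are given in the paper.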
