Abstract

This paper studies method of simulated moments (MSM) estimators that are implemented using Bayesian methods, specifically Markov chain Monte Carlo (MCMC). Motivation and theory for the methods are provided by Chernozhukov and Hong (2003). The paper shows, experimentally, that confidence intervals using these methods may have coverage which is far from the nominal level, a result which has parallels in the literature that studies overidentified GMM estimators. A neural network may be used to reduce the dimension of an initial set of moments to the minimum number that maintains identification, as in Creel (2017). When MSM-MCMC estimation and inference is based on such moments, and uses a continuously updating criterion function, confidence intervals have statistically correct coverage in all cases studied. The methods are illustrated by application to several test models, including a small DSGE model, and to a jump-diffusion model for returns of the S&P 500 index.
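
To make the continuously updating criterion concrete, the following is a minimal Python sketch, not the paper's implementation: the helper simulate_moments and all variable names here are assumptions, and the neural-network step that produces the data moments is taken as given.

    import numpy as np

    def cue_msm_criterion(theta, data_moments, simulate_moments, S=100):
        """Continuously updating MSM criterion (illustrative sketch).

        theta            : candidate parameter vector
        data_moments     : length-k moment vector computed from the observed
                           data (e.g., the output of a trained neural net)
        simulate_moments : assumed helper mapping (theta, S) to an (S, k)
                           array of moments from S simulated samples
        """
        sims = simulate_moments(theta, S)      # (S, k) simulated moments
        m_bar = sims.mean(axis=0)              # average simulated moment
        g = data_moments - m_bar               # moment discrepancy at theta
        # "Continuously updating": the weight matrix is re-estimated at
        # every theta from the simulations, not fixed in a first step.
        W = np.linalg.inv(np.cov(sims, rowvar=False))
        return g @ W @ g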

Highlights

  • It has long been known that classical inference methods based on first-order asymptotic theory, when applied to the generalized method of moments estimator, may lead to unreliable results, in the form of substantial finite sample biases and variances, and incorrect coverage of confidence intervals, especially when the model is overidentified (Donald et al., 2009; Hall and Horowitz, 1996; Hansen et al., 1996; Tauchen, 1986).

  • A larger number of samples was not used in order to obtain results that are more relevant for cases where it is costly to simulate from the model, as is the case for the jump-diffusion model studied below.

  • This paper has shown, through Monte Carlo experimentation, that confidence intervals based on quantiles of a tuned Markov chain Monte Carlo (MCMC) chain may have coverage which is far from the nominal level, even for simple models with few parameters.


Summary

Introduction

It has long been known that classical inference methods based on first-order asymptotic theory, when applied to the generalized method of moments estimator, may lead to unreliable results, in the form of substantial finite sample biases and variances, and incorrect coverage of confidence intervals, especially when the model is overidentified (Donald et al., 2009; Hall and Horowitz, 1996; Hansen et al., 1996; Tauchen, 1986). In another strand of the literature, Chernozhukov and Hong (2003) introduced Laplace-type estimators, which allow estimation and inference for classical statistical methods (those defined by optimization of an objective function) to be done by working with the elements of a tuned Markov chain, so that potentially difficult or unreliable steps, such as optimization or computation of asymptotic standard errors, may be avoided. A 95% confidence interval for a parameter θ_j is given by the interval (Q_{θ_j}(0.025), Q_{θ_j}(0.975)), where Q_{θ_j}(τ) is the τ-th quantile of the R values of the parameter θ_j in the chain C.
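
As an illustration of this quantile-based interval, here is a minimal Python sketch; the synthetic chain array is a stand-in for the R draws of θ_j that would come from an actual tuned MCMC chain targeting the quasi-posterior.

    import numpy as np

    # Stand-in for the R draws of theta_j from a tuned MCMC chain; in
    # practice these would come from sampling the quasi-posterior.
    rng = np.random.default_rng(0)
    chain = rng.normal(loc=1.0, scale=0.2, size=10_000)

    # 95% confidence interval from the 0.025 and 0.975 chain quantiles,
    # as in Chernozhukov and Hong (2003)
    lower, upper = np.quantile(chain, [0.025, 0.975])
    print(f"95% CI for theta_j: ({lower:.3f}, {upper:.3f})")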

Neural Moments
Examples
Stochastic Volatility
Mixture of Normals
DSGE Model
Monte Carlo Results
Conclusions

