Abstract

We perform extensive Monte Carlo experiments comparing the performance of multiple methods for computing numerical standard errors (NSEs). The NSE estimates the standard deviation that a simulation result would exhibit if the simulation experiment were repeated many times. All considered methods fail under extreme serial correlation. However, a substantial failure of the initial sequence estimators, the spectral density estimator, and the HAC kernel estimators requires either extreme serial correlation or a rather short time series combined with large serial correlation. The bootstrap methods and batch means estimators perform relatively poorly, whereas prewhitening substantially improves the HAC kernel estimators in experiments with large or extreme serial correlation.
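To illustrate the quantity being estimated, the following is a minimal sketch of one of the simpler NSE methods mentioned above, the batch means estimator, applied to a serially correlated AR(1) chain. The function name, batch count, and AR(1) setup are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def nse_batch_means(draws, n_batches=30):
    """NSE of the mean of correlated draws via batch means:
    split the series into batches and use the variability of
    batch means to estimate the variance of the overall mean.
    Illustrative sketch only (assumed settings, not the paper's)."""
    draws = np.asarray(draws, dtype=float)
    batch_size = len(draws) // n_batches
    trimmed = draws[: batch_size * n_batches]
    batch_means = trimmed.reshape(n_batches, batch_size).mean(axis=1)
    # Var(overall mean) is approximated by Var(batch means) / n_batches
    return np.sqrt(batch_means.var(ddof=1) / n_batches)

# Example: AR(1) chain with large (but not extreme) serial correlation
rng = np.random.default_rng(0)
phi, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

nse = nse_batch_means(x)
naive = x.std(ddof=1) / np.sqrt(n)  # i.i.d. formula, ignores correlation
print(nse, naive)
```

Because the draws are positively autocorrelated, the batch means NSE comes out noticeably larger than the naive i.i.d. standard error, which understates the uncertainty.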
