Abstract

It is often unclear whether time series displaying substantial persistence should be modelled as a vector autoregression in levels (perhaps with a trend term) or in differences. The impact of this decision on inference is examined here using Monte Carlo simulation. In particular, the size and power of variable inclusion (Granger causality) tests and the coverage of impulse response function confidence intervals are examined for simulated vector autoregression models using a variety of estimation techniques. We conclude that testing should be done using differenced regressors, but that overdifferencing a model yields poor impulse response function confidence interval coverage; modelling in Hodrick-Prescott filtered levels yields poor results in any case. We find that the lag-augmented vector autoregression method suggested by Toda and Yamamoto (1995) – which models the level of the series but allows for variable inclusion testing on changes in the series – performs well for both Granger causality testing and impulse response function estimation.
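As context for the lag-augmented approach named in the abstract, the following is a minimal illustrative sketch (not taken from the paper) of a Toda–Yamamoto style Granger causality test: the regression is estimated in levels with p + d_max lags, but the Wald restriction is placed only on the first p lags of the candidate causal variable, so the statistic retains its usual asymptotic chi-square distribution even when the series are integrated. The two simulated series, the lag order p = 2, and d_max = 1 are assumptions made purely for illustration.

```python
# Minimal sketch of a lag-augmented (Toda-Yamamoto, 1995) Granger causality test.
# Assumptions (illustrative, not from the paper): two series y and x, at most I(1),
# lag order p chosen in advance, d_max = 1 extra (untested) lag appended.

import numpy as np
import statsmodels.api as sm


def toda_yamamoto_granger(y, x, p, d_max=1):
    """Wald test that lags 1..p of x do not enter the levels equation for y.

    The equation is estimated in levels with k = p + d_max lags of both
    variables; only the first p lags of x are restricted, the extra d_max
    lags are left unrestricted.
    """
    k = p + d_max
    n = len(y)
    dep = y[k:]  # y_t for t = k..n-1
    # Regressors: lags 1..k of y, then lags 1..k of x.
    lags = [y[k - j:n - j] for j in range(1, k + 1)]
    lags += [x[k - j:n - j] for j in range(1, k + 1)]
    X = sm.add_constant(np.column_stack(lags))
    res = sm.OLS(dep, X).fit()
    # Column 0 is the constant, columns 1..k are y lags, k+1..2k are x lags.
    # Restrict only the first p lags of x (columns 1+k .. k+p).
    R = np.zeros((p, X.shape[1]))
    for i in range(p):
        R[i, 1 + k + i] = 1.0
    wald = res.wald_test(R, use_f=False)  # chi-square form of the Wald test
    return float(np.squeeze(wald.statistic)), float(wald.pvalue)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    # Simulated I(1) series in which changes in x feed into y with a one-period lag.
    x = np.cumsum(rng.normal(size=n))
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = y[t - 1] + 0.5 * (x[t - 1] - x[t - 2]) + rng.normal()
    stat, pval = toda_yamamoto_granger(y, x, p=2, d_max=1)
    print(f"Wald statistic = {stat:.2f}, p-value = {pval:.4f}")
```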
