Abstract

Recent research [B. Seo, Distribution theory for unit root tests with conditional heteroskedasticity, J. Econometrics 91 (1999) 113–144] has suggested that the examination of the unit root hypothesis in series exhibiting GARCH behaviour should proceed via joint maximum likelihood (ML) estimation of the unit root testing equation and the GARCH process. The results presented there show the asymptotic distribution of the resulting ML t-test to be a mixture of the Dickey–Fuller and standard normal distributions. In this paper, the relevance of these asymptotic arguments is considered for the finite samples encountered in empirical research. In particular, the influences of sample size, alternative values of the parameters of the GARCH process and the use of the Bollerslev–Wooldridge covariance matrix estimator upon the finite-sample distribution of the ML t-statistic are explored. It is shown that the resulting critical values for the ML t-statistic are similar to those of the Dickey–Fuller distribution rather than the standard normal, unless a large sample size and empirically unrealistic values of the volatility parameter of the GARCH process are considered. Use of the Bollerslev–Wooldridge covariance matrix estimator exaggerates this finding, causing a leftward shift in the finite-sample distribution of the ML t-statistic. The results of the simulation analysis are illustrated via an application to U.S. short-term interest rates.
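To make the simulation design described above concrete, the following minimal sketch (not taken from the paper; the parameter values, function names and the crude standard-error calculation are illustrative assumptions) simulates a random walk driven by GARCH(1,1) innovations and computes the ML t-statistic for the unit root hypothesis by jointly maximising the Gaussian log-likelihood over the autoregressive and GARCH parameters. The empirical quantiles of the resulting statistics can then be compared with Dickey–Fuller and standard normal critical values.

import numpy as np
from scipy.optimize import minimize

def simulate_ur_garch(n, omega=0.1, alpha=0.3, beta=0.6, seed=0):
    """Simulate y_t = y_{t-1} + u_t with GARCH(1,1) innovations u_t (unit root null)."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n)
    sig2 = np.full(n, omega / (1.0 - alpha - beta))  # start at the unconditional variance
    for t in range(1, n):
        sig2[t] = omega + alpha * u[t - 1] ** 2 + beta * sig2[t - 1]
        u[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return np.cumsum(u)

def neg_loglik(params, y):
    """Negative Gaussian log-likelihood of the joint AR(1)-GARCH(1,1) model."""
    rho, omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    u = y[1:] - rho * y[:-1]
    sig2 = np.empty_like(u)
    sig2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(u)):
        sig2[t] = omega + alpha * u[t - 1] ** 2 + beta * sig2[t - 1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * sig2) + u ** 2 / sig2)

def ml_t_stat(y):
    """Joint ML estimate of (rho, omega, alpha, beta) and the t-statistic for H0: rho = 1."""
    res = minimize(neg_loglik, x0=np.array([1.0, 0.1, 0.1, 0.8]),
                   args=(y,), method="Nelder-Mead")
    # Crude standard error: curvature of the negative log-likelihood in rho with the
    # remaining parameters held at their estimates (a full Hessian inverse, or the
    # Bollerslev-Wooldridge robust estimator, would be used in practice).
    eps = 1e-3
    def profile(r):
        p = res.x.copy()
        p[0] = r
        return neg_loglik(p, y)
    curv = (profile(res.x[0] + eps) - 2.0 * profile(res.x[0]) + profile(res.x[0] - eps)) / eps ** 2
    se = 1.0 / np.sqrt(curv)
    return (res.x[0] - 1.0) / se

# A small Monte Carlo under the null (kept short here; many more replications would
# be used to tabulate finite-sample critical values).
t_stats = [ml_t_stat(simulate_ur_garch(200, seed=s)) for s in range(100)]
print(np.percentile(t_stats, [1, 5, 10]))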
