Many time series encountered in practice are nonstationary and are often generated by a process with a unit root. Because of data-collection procedures or researchers' practice, the time series used in analysis and modeling are frequently obtained through temporal aggregation. As a result, the series used in testing for a unit root are often time series aggregates. In this paper, we study the effects of using aggregate time series on the Dickey–Fuller test for a unit root. We begin by deriving a proper model for the aggregate series. Based on this model, we find the limiting distributions of the test statistics and illustrate how the tests are affected by the use of aggregate time series. The results show that these distributions shift to the right and that the effect increases with the order of aggregation, strongly affecting both the empirical significance level and the power of the test. To correct this problem, we present tables of critical points appropriate for tests based on aggregate time series and demonstrate their adequacy. Examples illustrate the conclusions of our analysis.
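As an illustrative sketch only (not the paper's derivation), the following Python code simulates a random walk, forms non-overlapping m-period aggregates by summing consecutive blocks, and computes the Dickey–Fuller t-statistic for both the disaggregate and the aggregate series; the function names and the no-drift regression form Δy_t = φ y_{t-1} + e_t are assumptions chosen for brevity.

```python
import numpy as np

def aggregate(series, m):
    # Non-overlapping temporal aggregation: sum each block of m consecutive
    # observations, discarding any incomplete final block.
    n = (len(series) // m) * m
    return series[:n].reshape(-1, m).sum(axis=1)

def df_tstat(y):
    # Dickey-Fuller t-statistic from the no-drift regression
    # diff(y)_t = phi * y_{t-1} + e_t, i.e. the t-ratio of the OLS slope.
    dy = np.diff(y)
    ylag = y[:-1]
    phi = (ylag @ dy) / (ylag @ ylag)
    resid = dy - phi * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return phi / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(1200))  # random walk: a unit-root process
t_raw = df_tstat(y)                  # statistic on the original series
t_agg = df_tstat(aggregate(y, 4))    # statistic on the order-4 aggregate
```

Comparing `t_agg` with `t_raw` over many simulated paths would trace out the rightward shift of the null distribution that the paper documents; a single draw, of course, need not exhibit it.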