The traditional rationale for differencing time series data is to attain stationarity. For a nearly non-stationary first-order autoregressive process—AR(1) with positive slope parameter near unity—we were led to a complementary rationale. If one suspects near non-stationarity of the AR(1) process, if the sample size is 'small' or 'moderate', and if good one-step-ahead prediction performance is the goal, then it is wise to difference the data and treat the differences as observations on a stationary AR(1) process. Estimation by ordinary least squares then appears to be at least as satisfactory as nonlinear least squares. Use of differencing for an already stationary process can be motivated by Bayesian concepts: differencing can be viewed as an easy way to incorporate non-diffuse prior judgement—that the process is nearly non-stationary—into one's analysis. Random walks and near random walks are often encountered in economics. Unless one's sample size is large, the same statistical analyses apply to either.
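The following is a minimal sketch of the idea, not the paper's own computations: it simulates a nearly non-stationary AR(1) series, then compares one-step-ahead forecasts from an OLS fit in levels with forecasts obtained by differencing the data, fitting an AR(1) to the differences by OLS, and adding the predicted difference back to the last observed level. The true slope (0.95), sample size (50), random seed, and the omission of intercept terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a nearly non-stationary AR(1): y_t = rho * y_{t-1} + e_t, rho near 1.
rho_true, n = 0.95, 50          # illustrative 'moderate' sample size
e = rng.standard_normal(n + 1)
y = np.empty(n + 1)
y[0] = e[0]
for t in range(1, n + 1):
    y[t] = rho_true * y[t - 1] + e[t]

# Levels approach: OLS regression of y_t on y_{t-1} (no intercept), forecast next level.
rho_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
forecast_levels = rho_hat * y[-1]

# Differencing approach: treat d_t = y_t - y_{t-1} as a stationary AR(1),
# estimate its slope by OLS, forecast the next difference, add back the last level.
d = np.diff(y)
phi_hat = np.dot(d[:-1], d[1:]) / np.dot(d[:-1], d[:-1])
forecast_diff = y[-1] + phi_hat * d[-1]

print(f"rho_hat (levels OLS)           = {rho_hat:.3f}")
print(f"phi_hat (differenced OLS)      = {phi_hat:.3f}")
print(f"one-step forecast, levels      = {forecast_levels:.3f}")
print(f"one-step forecast, differenced = {forecast_diff:.3f}")
```

Repeating the simulation many times and comparing squared one-step-ahead forecast errors would be the natural way to see the small-sample advantage the abstract describes; a single draw only illustrates the mechanics.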