Abstract

In this paper we propose tests of the null hypothesis that a time series process displays a constant level against the alternative that it displays (possibly) multiple changes in level. Our proposed tests are based on functions of appropriately standardized sequences of the differences between sub-sample mean estimates from the series under investigation. The tests we propose differ notably from extant tests for level breaks in the literature in that they are designed to be robust as to whether the process admits an autoregressive unit root (the data are I(1)) or stable autoregressive roots (the data are I(0)). We derive the asymptotic null distributions of our proposed tests, along with representations for their asymptotic local power functions against Pitman drift alternatives in both I(0) and I(1) environments. Associated estimators of the level break fractions are also discussed. We initially outline our procedure for the case of non-trending series, but our analysis is subsequently extended to allow for series which display an underlying linear trend, in addition to possible level breaks. Monte Carlo simulation results are presented which suggest that the proposed tests perform well in small samples, showing good size control under the null, regardless of the order of integration of the data, and displaying good power when level breaks occur.
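To fix ideas, the following is a minimal sketch of a statistic built from differences between sub-sample means, scanned over candidate break fractions and standardized by a long-run variance estimate. It is an illustrative stand-in only: the function name, the trimming fraction `tau`, and the Bartlett-kernel standardization are assumptions for the example and do not reproduce the paper's actual test statistics or their I(0)/I(1)-robust standardization.

```python
import numpy as np

def subsample_mean_diff_stat(y, tau=0.15):
    """Illustrative level-break statistic (not the paper's): maximum
    standardized absolute difference between pre- and post-break
    sub-sample means over candidate break fractions in [tau, 1 - tau]."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    lo, hi = int(np.floor(tau * T)), int(np.ceil((1 - tau) * T))

    # Simple Bartlett-kernel (Newey-West type) long-run variance estimate.
    e = y - y.mean()
    bw = int(np.floor(4 * (T / 100) ** (2 / 9)))
    lrv = e @ e / T
    for k in range(1, bw + 1):
        w = 1 - k / (bw + 1)
        lrv += 2 * w * (e[k:] @ e[:-k]) / T

    stats = []
    for k in range(lo, hi):
        diff = y[:k].mean() - y[k:].mean()
        # Scale mirrors the variance of a difference of two sample means
        # under short memory; the paper uses a different standardization.
        scale = np.sqrt(lrv * (1.0 / k + 1.0 / (T - k)))
        stats.append(abs(diff) / scale)

    stats = np.array(stats)
    khat = lo + int(stats.argmax())       # candidate break date
    return stats.max(), khat / T          # statistic and break fraction

# Example: one level shift of size 1 at mid-sample.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
stat, frac = subsample_mean_diff_stat(y)
```

The maximizing break fraction plays the role of the associated break-fraction estimator mentioned in the abstract; the actual tests, their standardization, and their limiting distributions are developed in the paper.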
