Kernel estimators have been popular for decades in long-run variance estimation. To reduce the efficiency loss, measured by mean-squared error, in several important aspects of kernel estimation, we propose a novel class of converging kernel estimators with three major properties: (a) the optimal bandwidth choice is model-free; (b) positive-definiteness is ensured through a principle-driven aggregation technique with no loss of theoretical efficiency; and (c) potentially misspecified prewhitening models and transformations of the time series do not harm the asymptotic efficiency. A shrinkage prewhitening transformation is proposed for more robust finite-sample performance. The estimator has a positive bias that diminishes with the sample size, making it more conservative than the typically negatively biased classical estimators. The proposal improves upon standard kernel functions and generalizes well to the multivariate case. We discuss its performance through simulation results and a real-data application in the forecast breakdown test.
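For context, a minimal sketch of the classical kernel long-run variance estimator that the abstract contrasts with (a Bartlett-kernel, Newey-West-style weighted sum of sample autocovariances); this illustrates the baseline, not the paper's converging kernel estimator, and the function name and bandwidth convention are illustrative assumptions:

```python
import numpy as np

def bartlett_lrv(x, bandwidth):
    """Classical Bartlett-kernel long-run variance estimate of a series.

    Sums sample autocovariances up to `bandwidth` lags with linearly
    decaying Bartlett weights. This is the standard baseline estimator,
    not the converging kernel estimator proposed in the paper.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    lrv = np.dot(xc, xc) / n  # lag-0 sample autocovariance
    for k in range(1, int(bandwidth) + 1):
        w = 1.0 - k / (bandwidth + 1.0)      # Bartlett kernel weight
        gamma_k = np.dot(xc[:-k], xc[k:]) / n  # lag-k sample autocovariance
        lrv += 2.0 * w * gamma_k
    return lrv
```

For i.i.d. noise the long-run variance equals the ordinary variance, so the estimate should be near 1 for standard normal data; for autocorrelated series the weighted autocovariance terms raise (or lower) it accordingly. The bandwidth choice is exactly the model-dependent tuning step that the proposed class of estimators aims to make model-free.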