Abstract

Combining forecasts is an established approach for improving forecast accuracy. So-called optimal weights (OWs) estimate combination weights by minimizing errors on past forecasts. Yet the most successful and common approach ignores all training data and assigns equal weights (EWs) to forecasts. We analyze this phenomenon by relating forecast combination to statistical learning theory, which decomposes forecast errors into three components: bias, variance, and irreducible error. In this framework, EWs minimize the variance component (errors resulting from estimation uncertainty) but ignore the bias component (errors from under-sensitivity to training data). OWs, in contrast, minimize the bias component but ignore the variance component. Reducing one component generally increases the other. To address this trade-off between bias and variance, we first derive the expected squared error of a combination using weights between EWs and OWs (technically, OWs shrunk toward EWs) and decompose it into the three error components. We then use the components to derive the shrinkage factor between EWs and OWs that minimizes the expected error. We evaluate the approach on forecasts from the Federal Reserve Bank of Philadelphia’s Survey of Professional Forecasters. For these forecasts, we first show that assumptions regarding the error distribution that are commonly used in theoretical analyses are likely to be violated in practice. We then demonstrate that our approach improves over EWs and OWs if the assumptions are met, for instance, as a result of applying a standardization procedure to the training data. This paper was accepted by Han Bleichrodt, decision analysis.
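
To make the shrinkage idea concrete, the following minimal Python sketch (not taken from the paper) estimates OWs from past forecast errors, shrinks them toward EWs by a factor lam, and, purely for illustration, selects that factor by grid search on a holdout sample; the paper instead derives the error-minimizing factor analytically from the bias and variance components. The function names, the simulated data, and the holdout selection are assumptions made here for illustration.

```python
import numpy as np

def optimal_weights(F, y):
    """Estimate 'optimal' combination weights (OWs) by minimizing the
    in-sample squared error of the combined forecast, with weights
    constrained to sum to one.
    F: (T, k) array of k individual forecasts over T periods; y: (T,) outcomes.
    Uses the closed-form solution w proportional to S^{-1} 1, where S is the
    sample second-moment matrix of the individual forecast errors."""
    E = F - y[:, None]                   # individual forecast errors
    S = E.T @ E / len(y)                 # k x k error second-moment matrix
    ones = np.ones(F.shape[1])
    w = np.linalg.solve(S, ones)
    return w / (ones @ w)                # normalize: weights sum to 1

def shrunk_weights(F, y, lam):
    """OWs shrunk toward equal weights (EWs) by factor lam in [0, 1]:
    lam = 0 reproduces OWs, lam = 1 reproduces EWs."""
    k = F.shape[1]
    ew = np.full(k, 1.0 / k)
    return lam * ew + (1.0 - lam) * optimal_weights(F, y)

# Illustrative usage on simulated data: choose the shrinkage factor by
# grid search on a holdout sample. (The paper instead derives the
# error-minimizing factor analytically from the bias and variance components.)
rng = np.random.default_rng(0)
T, k = 120, 5
y = rng.normal(size=T)                                  # outcomes
F = y[:, None] + rng.normal(scale=1.0, size=(T, k))     # noisy individual forecasts
F_tr, y_tr, F_va, y_va = F[:80], y[:80], F[80:], y[80:]
lams = np.linspace(0.0, 1.0, 11)
mses = [np.mean((F_va @ shrunk_weights(F_tr, y_tr, l) - y_va) ** 2) for l in lams]
print("holdout-selected shrinkage factor:", lams[int(np.argmin(mses))])
```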
