Abstract

We develop a new dynamic 'big data' forecasting algorithm that uses 128,732 series from the Federal Reserve Economic Data (FRED) database. We evaluate it through a pseudo real-time forecasting exercise involving 50 all-transactions house price indexes at the US state level, with an out-of-sample period spanning the recent Great Recession. We estimate approximately 482 million bivariate models drawn from a cross-section of a dimension rarely seen in the econometric literature, selecting and weighting variables across different horizons, forecast approaches, model set sizes, rolling and recursive windows, and combination metrics. Our approach performs extremely favorably against each of the three commonly used benchmarks against which we evaluate it, including a comparable big data macroeconometric (FAVAR) model, which it outperforms in approximately 80% of instances. We provide several stylized results to guide big data forecasting in macroeconometrics and related fields: the algorithm shows a general preference for Bayesian Model Averaging over smoothed Akaike Information Criterion weightings, for iterated multi-step over dynamic multi-step forecast generation, and for recursive over rolling estimation windows. There is further strong evidence that forecast error increases with the volatility of the underlying series, and we show that our forecast errors are relatively stable, except during the recent financial crisis.
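To make the combination step concrete, the sketch below illustrates one of the weighting metrics the abstract names, smoothed Akaike Information Criterion weights, applied to one-step-ahead forecasts from many simple bivariate models. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the data are simulated stand-ins for FRED series, each "bivariate model" is a hypothetical ADL(1,1) regression of the target on its own lag and one candidate predictor's lag, and the helper name fit_bivariate is invented for this example.

```python
# Minimal sketch: combining bivariate-model forecasts with smoothed-AIC weights.
# All series, lag orders, and names here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a target series y and K candidate predictor series.
T, K = 200, 25
X = rng.standard_normal((T, K))
y = 0.6 * np.roll(X[:, 0], 1) + rng.standard_normal(T)
y[0] = 0.0  # drop the wrap-around value introduced by np.roll

def fit_bivariate(y, x):
    """OLS of y_t on (1, y_{t-1}, x_{t-1}); returns coefficients and AIC."""
    Z = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
    target = y[1:]
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    resid = target - Z @ beta
    n, k = len(target), Z.shape[1]
    sigma2 = resid @ resid / n
    aic = n * np.log(sigma2) + 2 * k  # Gaussian AIC, up to an additive constant
    return beta, aic

fits = [fit_bivariate(y, X[:, j]) for j in range(K)]
aics = np.array([aic for _, aic in fits])

# Smoothed-AIC weights: w_i proportional to exp(-0.5 * (AIC_i - min AIC)).
delta = aics - aics.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

# One-step-ahead forecast from each model, then the weighted combination.
point_forecasts = np.array(
    [beta @ np.array([1.0, y[-1], X[-1, j]]) for j, (beta, _) in enumerate(fits)]
)
combined = weights @ point_forecasts
print(f"combined one-step-ahead forecast: {combined:.3f}")
```

A Bayesian Model Averaging scheme of the kind the abstract favors would instead weight each model by an (approximate) posterior model probability rather than by AIC differences; the two schemes share the same exponential-weighting structure, which is why they are natural competitors in forecast combination exercises.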
