Abstract

We develop a new variational Bayes estimation method for large-dimensional sparse vector autoregressive models with exogenous predictors. Unlike existing Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms, our approach is not based on a structural-form representation of the model, but directly shrinks the lead-lag cross-sectional interdependencies in the transition matrix. We compare the performance of our approach against state-of-the-art MCMC and VB methods, both in simulation and when forecasting a large cross section of industry portfolios spanning almost a hundred years of monthly data. The main results provide robust evidence that by directly shrinking weak cross-sectional signals one can substantially improve both the statistical and economic out-of-sample performance of multivariate models for return predictability. This result holds across a variety of alternative shrinkage priors, such as the Bayesian adaptive lasso, normal-gamma, and horseshoe priors.
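The idea of shrinking the reduced-form transition matrix directly can be illustrated with a simple frequentist analogue (not the paper's variational Bayes estimator): a lasso-type penalty applied equation by equation to the coefficients of a VAR(1), so that weak lead-lag links are set exactly to zero. All dimensions, tuning values, and function names below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch only: sparse VAR(1) estimation via per-equation
# lasso (proximal gradient / ISTA), a rough frequentist stand-in for the
# Bayesian shrinkage priors discussed in the abstract.

rng = np.random.default_rng(0)
N, T = 5, 400

# True transition matrix: own lags matter, cross-sectional links are
# mostly zero except one genuine lead-lag spillover.
A_true = np.zeros((N, N))
np.fill_diagonal(A_true, 0.5)
A_true[0, 1] = 0.3

# Simulate y_t = A y_{t-1} + eps_t
Y = np.zeros((T, N))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.standard_normal(N)

X, Z = Y[:-1], Y[1:]  # lagged regressors and one-step-ahead targets

def lasso_ista(X, z, lam=0.5, iters=500):
    """Soft-thresholded gradient descent for one VAR equation."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step size from Lipschitz bound
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - z)             # gradient of squared loss
        b = b - lr * g
        # soft-thresholding: weak coefficients are shrunk exactly to zero
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)
    return b

# Stack the equation-by-equation estimates into the transition matrix.
A_hat = np.vstack([lasso_ista(X, Z[:, i]) for i in range(N)])
print(np.round(A_hat, 2))
```

The soft-thresholding step is what produces exact zeros in the estimated transition matrix; a fully Bayesian treatment would instead place a continuous shrinkage prior (adaptive lasso, normal-gamma, or horseshoe) on each coefficient and approximate the posterior.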
