Abstract

Nonparametric test procedures in predictive regressions retain their limiting null distributions under both low and high regressor persistence, but have low local power compared to misspecified linear predictive regressions. We argue that instrumental variable (IV) inference is better suited, in terms of local power, to analyzing additive predictive models with uncertain predictor persistence. We then propose a two-step procedure for out-of-sample prediction: for the current estimation window, one first tests for predictability; in case of a rejection, one predicts using a nonlinear regression model; otherwise, the historical average of stock returns is used. This two-step approach performs better than its competitors, though not by a large margin, in a pseudo-out-of-sample prediction exercise for the S&P 500.
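
As an illustration, the following Python sketch implements a pseudo-out-of-sample scheme of the two-step type described above. It is a minimal sketch under stated assumptions, not the paper's implementation: a simple OLS t-test stands in for the IV-based predictability test, a quadratic regression stands in for the nonlinear predictive model, and the function names, window length, and significance level are illustrative choices.

```python
import numpy as np
from math import erf, sqrt


def predictability_pvalue(y, x):
    """Placeholder predictability test: two-sided p-value of the slope in an
    OLS regression of y on a constant and x.  The paper's actual test is an
    IV-based procedure robust to the predictor's persistence; this simple
    t-test (with a normal approximation) only stands in for it."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    tstat = beta[1] / np.sqrt(cov[1, 1])
    return 2 * (1 - 0.5 * (1 + erf(abs(tstat) / sqrt(2))))


def two_step_forecasts(returns, predictor, window=120, alpha=0.05):
    """Two-step pseudo-out-of-sample forecasts: on each rolling estimation
    window, test for predictability; on rejection, forecast with a simple
    nonlinear (here: quadratic) regression in the lagged predictor,
    otherwise use the historical average of returns."""
    forecasts = []
    for t in range(window, len(returns) - 1):
        y = returns[t - window + 1 : t + 1]   # returns r_{s+1} in the window
        x = predictor[t - window : t]         # lagged predictor x_s
        if predictability_pvalue(y, x) < alpha:
            # nonlinear fit: quadratic in the predictor (illustrative choice)
            X = np.column_stack([np.ones_like(x), x, x ** 2])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            x_t = predictor[t]
            forecasts.append(beta[0] + beta[1] * x_t + beta[2] * x_t ** 2)
        else:
            forecasts.append(y.mean())        # historical-average benchmark
    return np.array(forecasts)
```

In a usage such as `two_step_forecasts(returns, dividend_yield)`, the resulting series can be compared against the historical-average benchmark via an out-of-sample R-squared, which is the kind of comparison a pseudo-out-of-sample exercise like the one in the abstract relies on.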
