Abstract

In two-pass regression tests of asset-pricing models, cross-sectional correlations in the errors of the first-pass time-series regression lead to correlated measurement errors in the betas used as explanatory variables in the second-pass cross-sectional regression. The slope estimator of the second-pass regression estimates the factor risk premium, and its significance is decisive for the validity of the pricing model. While it is well known that the slope estimator is downward biased in the presence of uncorrelated measurement errors, we show in this paper that the correlations seen in empirical return data substantially suppress this bias. For the case of a single-factor model, we calculate the bias of the OLS slope estimator in the presence of correlated measurement errors using a first-order Taylor approximation in the size of the errors. We show that the bias increases with the size of the errors, but decreases the more the errors are correlated. We illustrate and validate our result using a simulation approach based on empirical data commonly used in asset-pricing tests.
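The mechanism described above can be sketched in a small Monte Carlo experiment. This is an illustrative simulation, not the paper's own code: the sample sizes, error variance, and the way cross-sectional error correlation is induced (a common shock shared by all assets, with weight `rho`) are assumptions chosen for demonstration. The first pass estimates each asset's beta by time-series OLS; the second pass regresses mean returns on the estimated betas, whose measurement errors attenuate the slope when uncorrelated and largely cancel into a common shift when correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 60          # assets, time periods (hypothetical sizes)
lam = 0.5               # true factor risk premium (mean of the factor)
betas = rng.uniform(0.5, 1.5, N)

def second_pass_slope(rho):
    """One two-pass estimate of lam; rho in [0, 1] sets the weight of a
    common error shock, i.e. the cross-sectional error correlation."""
    f = lam + rng.standard_normal(T)                 # factor realizations
    common = rng.standard_normal(T)                  # shock shared by all assets
    eps = 2.0 * (np.sqrt(rho) * common[:, None]
                 + np.sqrt(1.0 - rho) * rng.standard_normal((T, N)))
    r = f[:, None] * betas[None, :] + eps            # T x N return panel
    # First pass: time-series OLS beta for each asset.
    X = np.column_stack([np.ones(T), f])
    b_hat = np.linalg.lstsq(X, r, rcond=None)[0][1]  # estimated betas (N,)
    # Second pass: cross-sectional OLS of mean returns on estimated betas.
    Z = np.column_stack([np.ones(N), b_hat])
    return np.linalg.lstsq(Z, r.mean(axis=0), rcond=None)[0][1]

for rho in (0.0, 0.9):
    est = np.mean([second_pass_slope(rho) for _ in range(500)])
    print(f"rho={rho}: mean slope estimate {est:.3f}  (true premium {lam})")
```

With `rho=0` the average slope estimate sits noticeably below the true premium (classical attenuation bias); with strongly correlated errors the measurement errors in the estimated betas are nearly a common shift, which the cross-sectional intercept absorbs, so the slope estimate moves much closer to the true value, in line with the paper's result.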
