Abstract

It is well known that in simple linear regression, measurement error in the explanatory variable biases the OLS slope estimator downward. Two-pass regression tests of asset-pricing models face exactly this problem: the second-pass cross-sectional regression uses, as explanatory variables, imprecise estimates of asset betas extracted from the first-pass time-series regression. The slope estimator of the second-pass regression serves as the estimate of the pricing model's factor risk premium, and since the significance of this estimate is decisive for the validity of the model, knowledge of the properties of the slope estimator, in particular its bias, is crucial. We first show that cross-sectional correlations in the idiosyncratic errors of the first-pass time-series regression lead to correlated measurement errors in the betas used in the second-pass cross-sectional regression. We then study the effect of correlated measurement errors on the bias of the OLS slope estimator. Using a Taylor approximation, we develop an analytic expression for the bias of the second-pass slope estimator with a finite number of test assets N and a finite time-series sample size T. The bias turns out to depend in a non-trivial way not only on the size and correlations of the measurement errors but also on the distribution of the true values of the explanatory variable (the betas): while the bias increases with the size of the errors, it decreases the more strongly the errors are correlated. We illustrate and validate our result with a simulation approach based on empirical return data commonly used in asset-pricing tests. In particular, we show that correlations of the kind seen in empirical returns (e.g., due to industry effects in sorted portfolios) substantially suppress the bias.
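
For orientation, the classical errors-in-variables benchmark behind the opening claim can be written down directly. This is the standard textbook result under the assumption that the measurement error u is independent of the true regressor x and of the regression error; it is not the paper's finite-N, finite-T expression, which additionally accounts for correlated errors and the distribution of the betas.

```latex
% Classical attenuation bias: observing x with independent noise u
% shrinks the probability limit of the OLS slope toward zero.
y_i = \alpha + \gamma x_i + \varepsilon_i ,
\qquad
\hat{x}_i = x_i + u_i ,
\qquad
\operatorname{plim}_{N \to \infty} \hat{\gamma}_{\mathrm{OLS}}
  = \gamma \, \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2} \;<\; \gamma .
```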

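The two-pass procedure and the role of correlated measurement errors can also be illustrated with a small Monte Carlo sketch. The parameterization below (a single factor, equicorrelated idiosyncratic errors, N = 25 assets, T = 60 periods, and the values of lam, sigma_f, sigma_e, rho) is a hypothetical choice for illustration only; none of the numbers come from the paper.

```python
# Illustrative Monte Carlo sketch of the two-pass regression described in the
# abstract. All parameter values below are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N, T = 25, 60        # number of test assets, time-series sample size
n_sims = 2000        # Monte Carlo replications
lam = 0.5            # true factor risk premium (per period)
sigma_f = 1.0        # factor volatility
sigma_e = 2.0        # idiosyncratic volatility
rho = 0.5            # cross-sectional correlation of idiosyncratic errors

betas = np.linspace(0.5, 1.5, N)   # true betas of the test assets

# Equicorrelated idiosyncratic covariance: correlated errors in the first
# pass induce correlated measurement errors in the estimated betas.
Sigma = sigma_e**2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))
L = np.linalg.cholesky(Sigma)

slopes = np.empty(n_sims)
for s in range(n_sims):
    f = lam + sigma_f * rng.standard_normal(T)      # factor realizations
    eps = rng.standard_normal((T, N)) @ L.T         # correlated errors
    R = betas[None, :] * f[:, None] + eps           # asset returns (T x N)

    # First pass: time-series OLS of each asset's returns on the factor.
    f_dm = f - f.mean()
    beta_hat = (f_dm @ (R - R.mean(axis=0))) / (f_dm @ f_dm)

    # Second pass: cross-sectional OLS of mean returns on estimated betas.
    b_dm = beta_hat - beta_hat.mean()
    slopes[s] = (b_dm @ R.mean(axis=0)) / (b_dm @ b_dm)

print(f"true premium: {lam:.3f}")
print(f"mean second-pass slope: {slopes.mean():.3f} "
      f"(bias: {slopes.mean() - lam:+.3f})")
```

Setting rho = 0 reproduces the pure attenuation bias, while increasing rho shrinks it, in line with the abstract's main finding. The intuition is visible in the second pass: an equicorrelated error component shifts all estimated betas nearly in parallel, and such a common shift is absorbed by the cross-sectional intercept rather than the slope.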