Abstract

In this paper, we develop a new measure of specification error, and thus derive new statistical tests, for conditional factor models, i.e., models in which the factor loadings (and hence risk premia) are allowed to be time-varying. Our test exploits the close links between the stochastic discount factor framework and mean-variance efficiency. We show that a given set of factors is a true conditional asset pricing model if and only if the efficient frontiers spanned by the traded assets and by the factor-mimicking portfolios, respectively, intersect. In fact, we show that our test is proportional to the difference in the squared Sharpe ratios of these two frontiers. We draw three main conclusions from our empirical findings. First, optimal scaling clearly improves the performance of asset pricing models, to the point where several of the scaled models are capable of explaining asset pricing anomalies. However, even the optimally scaled models fall short of being true conditional asset pricing models, in that they fail to price actively managed portfolios correctly. Second, there is significant time-variation in factor loadings, and hence in risk premia, which plays an important role in asset pricing. Moreover, the optimal factor loadings display a high degree of non-linearity in the conditioning variables, suggesting that the linear scaling prevalent in the literature is sub-optimal and does not capture the inter-temporal pattern of risk premia. Third, skewness and kurtosis do matter in the conditional setting, while adding little to unconditional performance.
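The test statistic described above is proportional to the gap between the maximal squared Sharpe ratios of the two frontiers. As a minimal numerical sketch, the maximal squared Sharpe ratio attainable from a set of assets with excess-mean vector μ and covariance matrix Σ is the standard mean-variance quantity μ′Σ⁻¹μ. The asset means and covariance below are hypothetical, and treating the first two assets as the factor-mimicking portfolios is purely for illustration; the paper's actual estimation and inference procedure is not reproduced here.

```python
import numpy as np

def max_squared_sharpe(mu, Sigma):
    """Maximal squared Sharpe ratio mu' Sigma^{-1} mu attainable
    from assets with excess-mean vector mu and covariance Sigma."""
    return float(mu @ np.linalg.solve(Sigma, mu))

rng = np.random.default_rng(0)

# Hypothetical example: 5 traded assets; the first 2 stand in for
# the factor-mimicking portfolios.
mu = np.array([0.05, 0.04, 0.06, 0.03, 0.045])
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)  # positive definite by construction

theta2_assets = max_squared_sharpe(mu, Sigma)            # full frontier
theta2_factors = max_squared_sharpe(mu[:2], Sigma[:2, :2])  # factor frontier

# The specification-error measure is proportional to this gap; it is
# non-negative because the factor frontier is spanned by a subset of
# the assets, and zero when the two frontiers intersect.
delta = theta2_assets - theta2_factors
```

The non-negativity of `delta` reflects the spanning logic in the abstract: adding assets can only enlarge the attainable frontier, so the factors price the assets correctly exactly when the gap is zero.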

