Abstract
Mean-variance portfolio optimization is more popular than optimization procedures that employ downside risk measures such as the semivariance, despite the latter being more in line with the preferences of a rational investor. We describe strengths and weaknesses of semivariance and how to minimize it for asset allocation decisions. We then apply this approach to a variety of simulated and real data and show that the traditional approach based on the variance generally outperforms it. The results hold even if the CVaR is used, because all downside risk measures are difficult to estimate. The popularity of variance as a measure of risk appears therefore to be rationally justified.
Highlights
The classical framework of modern portfolio theory is based on the assumption that the investor only cares about the first two moments of the return distribution: mean and variance
A more realistic assumption is that the investor cares about the mean and some downside risk measure of the returns, such as the downside deviation, which measures variability only below the benchmark set by the investor
By comparing the Sortino ratio achieved by the different strategies in different settings, we study the conditions under which portfolios that use the semicovariance matrix as input outperform portfolios based on the covariance matrix
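The Sortino ratio used to compare strategies divides mean excess return over a benchmark by the downside deviation, which penalizes only returns below that benchmark. A minimal sketch (the function names `downside_deviation` and `sortino_ratio` are illustrative, not from the paper):

```python
import numpy as np

def downside_deviation(returns, benchmark=0.0):
    """Root mean square of shortfalls below the benchmark (semideviation)."""
    shortfall = np.minimum(returns - benchmark, 0.0)  # 0 for returns above benchmark
    return np.sqrt(np.mean(shortfall ** 2))

def sortino_ratio(returns, benchmark=0.0):
    """Mean return in excess of the benchmark, scaled by downside deviation."""
    excess = np.mean(returns) - benchmark
    return excess / downside_deviation(returns, benchmark)
```

Unlike the Sharpe ratio, a symmetric gain and loss of equal size do not contribute equally here: only the loss enters the denominator.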
Summary
The classical framework of modern portfolio theory is based on the assumption that the investor only cares about the first two moments of the return distribution: mean and variance. The heuristic procedure of Estrada (2008) yields an exogenous matrix that closely approximates the semicovariance matrix and can be used for portfolio optimization. Estrada (2008) also points out that it would be misleading to compare the results obtained from optimization procedures that employ the variance with those obtained from procedures that employ the semivariance using an index based on either mean-variance or mean-semivariance efficiency. Given an estimation window of a certain length, there is a level of (positive or negative) skewness beyond which the bias caused by using the wrong objective function (minimizing all deviations from the mean instead of only those below it) outweighs the benefit of lower estimation errors. This skewness threshold decreases as the estimation window grows longer; equivalently, fewer observations are needed for mean-semivariance optimization to catch up as the distribution becomes more skewed.
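The exogenous matrix referred to above can be sketched as follows: with a fixed benchmark B, each entry averages the products of the two assets' clipped shortfalls, so the matrix is symmetric and does not depend on the optimized weights. A minimal illustration, assuming the Estrada-style entry S_ij = E[min(r_i − B, 0) · min(r_j − B, 0)] (the helper `min_semivariance_weights` is a hypothetical unconstrained sketch; practical use would impose no-short constraints via a QP solver):

```python
import numpy as np

def estrada_semicovariance(R, benchmark=0.0):
    """Exogenous semicovariance approximation:
    S_ij = mean_t[ min(r_it - B, 0) * min(r_jt - B, 0) ].
    R is a (T, N) array of T return observations for N assets."""
    D = np.minimum(R - benchmark, 0.0)  # shortfalls, clipped at zero
    return D.T @ D / R.shape[0]

def min_semivariance_weights(S):
    """Fully invested minimum-semivariance weights, w proportional to S^{-1} 1.
    Unconstrained sketch only: shorting is allowed and S must be invertible."""
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()
```

Because the benchmark is fixed in advance, S plays the same role as the covariance matrix in the standard mean-variance program, which is what makes the comparison between the two approaches direct.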