Abstract

Portfolio optimization often struggles in realistic out-of-sample contexts. We deconstruct this stylized fact by comparing historical forecasts of portfolio optimization inputs with subsequent out-of-sample values. We confirm that historical forecasts are imprecise guides to subsequent values, but we discover that the resultant forecast errors are not entirely random. They have predictable patterns and can be partially reduced using their own history. Learning from past forecast errors to calibrate inputs (akin to empirical Bayesian learning) generates portfolio performance that reinforces the case for optimization. Furthermore, the portfolios achieve performance that meets expectations, a desirable yet elusive feature of optimization methods. The authors have furnished an Internet Appendix, which is available on the Oxford University Press Web site next to the link to the final published paper online.
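The core idea, learning from the history of one's own forecast errors to calibrate future inputs, can be illustrated with a minimal sketch. The code below is not the paper's method; it is a simplified, hypothetical example in which a naive rolling-mean return forecast carries a persistent bias (injected here purely for illustration), and each new forecast is corrected by subtracting the average of all previously observed forecast errors.

```python
# Hedged sketch (not the authors' procedure): correcting a biased
# rolling-mean return forecast with the mean of its own past errors.
import numpy as np

rng = np.random.default_rng(0)
true_mu, sigma = 0.05, 0.05
returns = true_mu + sigma * rng.standard_normal(160)  # simulated returns

window = 24
bias = 0.10  # illustrative persistent forecast bias (an assumption)
raw_forecasts, realized = [], []
for t in range(window, len(returns) - 1):
    # Naive historical forecast of the next-period return, plus bias
    raw_forecasts.append(returns[t - window:t].mean() + bias)
    # The subsequent out-of-sample value the forecast tried to predict
    realized.append(returns[t])

raw_forecasts = np.array(raw_forecasts)
realized = np.array(realized)
errors = raw_forecasts - realized  # history of forecast errors

# Calibrated forecast at time t: raw forecast minus the average of
# all forecast errors observed strictly before t (empirical-Bayes-like)
past_error_mean = np.array(
    [errors[:t].mean() for t in range(1, len(errors))]
)
calibrated = raw_forecasts[1:] - past_error_mean

mse_raw = np.mean((raw_forecasts[1:] - realized[1:]) ** 2)
mse_calibrated = np.mean((calibrated - realized[1:]) ** 2)
print(f"MSE raw:        {mse_raw:.5f}")
print(f"MSE calibrated: {mse_calibrated:.5f}")
```

Under these assumptions the calibrated forecasts achieve a markedly lower out-of-sample mean squared error, because the error history reveals and removes the systematic component of the forecast error while leaving the irreducible noise untouched.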
