Abstract

Effective approaches to forecast model selection are crucial to improving forecast accuracy and to facilitating the use of forecasts in decision-making processes. Information criteria and cross-validation are common approaches to forecast model selection. Both methods compare forecasts with the respective actual realizations. However, no existing selection method assesses out-of-sample forecasts before the actual values become available, a technique that human judgment applies in this context. Research on judgmental model selection emphasizes that human judgment can be superior to statistical selection procedures in evaluating the quality of forecasting models. We therefore propose a new statistical model selection approach based on these insights from human judgment. Our approach relies on an asynchronous comparison of forecasts and actual values, allowing for an ex ante evaluation of forecasts via representativeness. We test this criterion on numerous time series. Results from our analyses provide evidence that forecast performance can be improved when models are selected based on their representativeness. This paper was accepted by Manel Baucells, behavioral economics and decision analysis. Supplemental Material: The online appendix and data are available at https://doi.org/10.1287/mnsc.2022.4485.
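
The abstract leaves the representativeness criterion unspecified, so the sketch below is only a hedged illustration of the general idea, not the paper's method: candidate forecasts are scored, before any future actuals arrive, by how closely their distribution resembles the distribution of the observed history. The candidate models (naive, mean, drift), the binned L1 distance, and all function names are assumptions introduced for this example.

```python
import numpy as np


def representativeness_score(history, forecasts, n_bins=10):
    """Toy discrepancy between the distributions of the observed history and
    of the out-of-sample forecasts (smaller = more representative).
    Illustrative stand-in only, not the criterion proposed in the paper."""
    lo = min(history.min(), forecasts.min())
    hi = max(history.max(), forecasts.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p, _ = np.histogram(history, bins=bins, density=True)
    q, _ = np.histogram(forecasts, bins=bins, density=True)
    return np.abs(p - q).sum()  # L1 distance between binned densities


def select_by_representativeness(history, horizon=12):
    """Rank simple candidate models ex ante, i.e. before any future actuals
    are observed, and return the most representative one."""
    history = np.asarray(history, dtype=float)
    slope = (history[-1] - history[0]) / (len(history) - 1)
    candidates = {
        "naive": np.repeat(history[-1], horizon),
        "mean": np.repeat(history.mean(), horizon),
        "drift": history[-1] + slope * np.arange(1, horizon + 1),
    }
    scores = {name: representativeness_score(history, fc)
              for name, fc in candidates.items()}
    best = min(scores, key=scores.get)
    return best, candidates[best], scores


# Example usage on a synthetic trending series.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=60))
best, forecast, scores = select_by_representativeness(y)
print(best, scores)
```

The distributional distance here is deliberately simple; any measure of how plausible the forecasts look relative to the history could take its place, which is the kind of ex ante judgment the abstract attributes to human forecasters.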
