Predicting the unknown parameters of optimization models in decision-making scenarios poses significant challenges for machine learning (ML) algorithms. Conventionally, the prediction model is trained independently of the decision-making process: ML algorithms focus on minimizing prediction error and neglect the downstream optimization task, yet high prediction accuracy does not always translate into low decision error. The idea of directly reducing decision errors has been proposed to address this limitation. This paper introduces an optimization process that integrates predictive regression models within a mean–variance optimization setting. The technique employs a general loss function that captures decision error, so the predictive model not only forecasts the unknown optimization parameters but also favors predicted values that minimize decision error, prioritizing decision accuracy over the accuracy of the parameter predictions themselves. In contrast to traditional ML approaches that minimize standard loss functions such as mean squared error, the proposed model minimizes the objective value derived directly from the decision-making problem. The strategy is validated by developing an optimization-based regression tree model for predicting stock returns while reducing decision errors. Computational experiments on a stock market dataset show that the proposed framework outperforms the conventional regression tree approach, delivering decisions of markedly higher quality.
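To make the distinction between prediction error and decision error concrete, the following is a minimal sketch (not taken from the paper) using the unconstrained mean–variance problem, where the optimal weights have a closed form. The function names (`mv_weights`, `decision_error`), the risk-aversion parameter `gamma`, and the toy return/covariance numbers are illustrative assumptions; the paper's actual method trains a regression tree against such a decision loss rather than evaluating it post hoc.

```python
import numpy as np

def mv_weights(r, cov, gamma=1.0):
    # Closed-form solution of the unconstrained mean-variance problem:
    #   argmax_w  w @ r - (gamma / 2) * w @ cov @ w  =>  w = inv(gamma * cov) @ r
    return np.linalg.solve(gamma * cov, r)

def mv_objective(w, r, cov, gamma=1.0):
    # Mean-variance utility of portfolio w under return vector r.
    return w @ r - 0.5 * gamma * w @ cov @ w

def decision_error(r_true, r_pred, cov, gamma=1.0):
    # Regret: utility lost (under the true returns) by optimizing
    # against the predicted returns instead of the true ones.
    w_star = mv_weights(r_true, cov, gamma)
    w_hat = mv_weights(r_pred, cov, gamma)
    return (mv_objective(w_star, r_true, cov, gamma)
            - mv_objective(w_hat, r_true, cov, gamma))

# Toy two-asset instance (illustrative numbers).
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
r_true = np.array([0.05, 0.08])

# Two predictions with identical mean squared error...
r_a = np.array([0.06, 0.09])   # errors (+0.01, +0.01)
r_b = np.array([0.06, 0.07])   # errors (+0.01, -0.01)
mse_a = np.mean((r_a - r_true) ** 2)
mse_b = np.mean((r_b - r_true) ** 2)

# ...but different decision errors: the decision loss weights the
# prediction errors through the optimization problem (via inv(cov)),
# which is exactly the asymmetry a standard MSE objective ignores.
reg_a = decision_error(r_true, r_a, cov)
reg_b = decision_error(r_true, r_b, cov)
```

Here `r_a` and `r_b` have the same MSE, yet `reg_b > reg_a`, illustrating why a model trained purely for prediction accuracy can still induce poor decisions.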