Abstract

We consider linear programs in which some parameters of the objective function are unknown but data are available. For a risk-averse modeler, the solutions of these linear programs should be chosen so that they perform well over a range of likely scenarios inferred from the data. The conventional approach uses robust optimization. Taking the optimality gap as our loss criterion, we argue that this approach can be high-risk, in the sense that the optimality gap can be large with significant probability. We then propose two computationally tractable alternatives: the first uses bootstrap aggregation, known as bagging in the statistical learning literature, while the second uses a Bayes estimator in the decision-theoretic framework. Both are simulation-based schemes that aim to improve the distributional behavior of the optimality gap by reducing the frequency with which it takes large values.
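The abstract only names the bagging scheme; the following is a minimal sketch of how bagging might be applied to an LP with an uncertain cost vector, not the authors' method. The feasible region, the noise model, the resampling rule, and the solution-averaging step are all illustrative assumptions. It also evaluates the optimality gap of the aggregated solution, G(x) = c^T x - min_z c^T z, under the true cost.

```python
# Illustrative sketch (not the paper's algorithm): bagging LP solutions
# for  min_x c^T x  s.t.  A x <= b, x >= 0,  where c is estimated from data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical feasible region: A x <= b, x >= 0 (assumed, for illustration).
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
bounds = [(0.0, None)] * 2

# Data: n i.i.d. noisy observations of the true cost vector (assumed model).
c_true = np.array([-1.0, -2.0])
n = 50
data = c_true + rng.normal(scale=0.5, size=(n, 2))

def solve_lp(c):
    """Solve min c^T x over the fixed feasible region."""
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x

# Bagging: bootstrap-resample the data, solve the LP on each resample's
# plug-in cost estimate, then average the resulting solutions.
B = 200
solutions = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)     # bootstrap resample of the data
    c_hat = data[idx].mean(axis=0)       # plug-in estimate of the cost
    solutions.append(solve_lp(c_hat))
x_bag = np.mean(solutions, axis=0)       # bagged solution (feasible by convexity)

# Optimality gap of the bagged solution under the true cost:
# G(x) = c_true^T x - min_z c_true^T z  >= 0.
x_star = solve_lp(c_true)
gap = c_true @ x_bag - c_true @ x_star
print(f"bagged solution: {x_bag}, optimality gap: {gap:.4f}")
```

Because the feasible region is convex, the average of feasible solutions remains feasible, so the bagged point is a legitimate decision; repeating this experiment over fresh data draws would trace out the distribution of the optimality gap that the paper seeks to improve.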
