Abstract

We consider combinatorial optimization problems with uncertain linear objective functions. In the min–max–min robust optimization approach, a fixed number k of feasible solutions is computed such that the respective best of them is optimal in the worst case. The idea is to calculate the set of candidate solutions in a potentially expensive preprocessing phase and then select the best solution out of this set in real-time, once the actual scenario is known. In this paper, we investigate the complexity of the resulting min–max–min problem in the case of discrete uncertainty, as well as its connection to the classical min–max robust counterpart, for many classical combinatorial optimization problems. Essentially, it turns out that the min–max–min problem is not harder to solve than the min–max problem, while producing much better solutions in general for larger k. In particular, this approach may mitigate the so-called price of robustness, making it an attractive alternative to the classical robust optimization approach in many practical applications.
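The nested objective described above can be made concrete with a small brute-force sketch under discrete uncertainty. The instance below (four items, pick exactly two, three cost scenarios) and the helper `min_max_min` are invented for illustration, not taken from the paper: the value is the minimum over k-element sets of feasible solutions, of the maximum over scenarios, of the minimum cost among the chosen candidates. Setting k = 1 recovers the classical min–max robust value, so larger k can only improve (i.e., not increase) the objective.

```python
from itertools import combinations

# Toy instance (assumed for illustration): choose exactly 2 of 4 items;
# each discrete scenario assigns a cost to every item.
items = range(4)
feasible = list(combinations(items, 2))  # all feasible solutions
scenarios = [
    (3, 1, 4, 2),  # item costs in scenario 1
    (1, 4, 2, 5),  # scenario 2
    (5, 2, 1, 3),  # scenario 3
]

def cost(solution, scenario):
    """Total cost of a solution (a set of items) in one scenario."""
    return sum(scenario[i] for i in solution)

def min_max_min(k):
    """min over k-sets of solutions, max over scenarios,
    min over the candidates in the set."""
    return min(
        max(min(cost(x, s) for x in cand) for s in scenarios)
        for cand in combinations(feasible, k)
    )

print(min_max_min(1), min_max_min(2))
```

On this toy data the k = 2 value is strictly smaller than the min–max (k = 1) value, illustrating how precomputing a few candidate solutions can reduce the price of robustness. Exhaustive enumeration is only viable for tiny instances; the paper's point is precisely the complexity of this problem for classical combinatorial structures.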
