Abstract

Inspired by the principle of satisficing (Simon 1955), Long et al. (2021) propose an alternative framework for optimization under uncertainty, which we term the robust satisficing model. Instead of sizing the uncertainty set as in robust optimization, the robust satisficing model is specified by a target objective, with the aim of delivering a solution that is least affected by uncertainty in achieving that target. At the heart of this framework, we minimize the level of constraint violation under all possible realizations within the support set. Our framework is based on a constraint function that evaluates to the optimal objective value of a standard conic optimization problem, which can model a wide range of constraint functions that are convex in the decision variables and either convex or concave in the uncertain parameters. We derive an exact semidefinite optimization formulation when the constraint is biconvex quadratic with a quadratic penalty and the support set is ellipsoidal. We also show the equivalence between more general robust satisficing problems and classical adaptive robust linear optimization models with conic uncertainty sets, where the latter can be solved approximately using affine recourse adaptation. More importantly, under complete recourse and a reasonably chosen polyhedral support set and penalty function, we show that the exact reformulation and the safe approximations do not lead to infeasible problems whenever the chosen target exceeds the optimal objective value of the nominal optimization problem. Finally, we extend our framework to the data-driven setting and showcase the modelling and computational benefits of the robust satisficing framework over robust optimization with three numerical examples: portfolio selection, log-sum-exp optimization, and an adaptive lot-sizing problem.
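
To fix ideas, a schematic form of the target-based model described above is sketched below in notation we introduce here (it need not match the paper's): $x$ is the decision, $\mathcal{X}$ its feasible set, $f(x,z)$ the uncertain objective, $\tau$ the prescribed target, $\mathcal{Z}$ the support set, and $\Delta(z)$ a penalty measuring the deviation of the realization $z$ from its nominal value.

\[
\begin{aligned}
\min_{x \in \mathcal{X},\; k \ge 0} \quad & k \\
\text{s.t.} \quad & f(x, z) \le \tau + k\,\Delta(z) \qquad \forall z \in \mathcal{Z}.
\end{aligned}
\]

The smallest feasible $k$ quantifies how fragile the target is: it bounds the rate at which the target can be violated as the realization moves away from the nominal scenario, in contrast to robust optimization, which fixes an uncertainty set and optimizes the worst-case objective over it.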
