Abstract

In this paper, we consider two paradigms developed to account for uncertainty in optimization models: robust optimization (RO) and joint estimation-optimization (JEO). We examine recent developments in efficient and scalable iterative first-order methods for these problems, and show that these methods can be viewed through the lens of online convex optimization (OCO). The standard OCO framework has seen much success owing to its ability to handle decision-making in dynamic, uncertain, and even adversarial environments. Nevertheless, our applications of interest admit further flexibility in OCO via three simple modifications to the standard OCO assumptions: we introduce two new concepts, weighted regret and online saddle point problems, and we study the possibility of making lookahead (anticipatory) decisions. Our analyses demonstrate that these flexibilities, whenever applicable, have significant consequences. For example, in the strongly convex case, minimizing unweighted regret has a proven optimal bound of $O(\log(T)/T)$, whereas we show that a bound of $O(1/T)$ is achievable when we consider weighted regret. Similarly, in the smooth case, allowing $1$-lookahead decisions yields an $O(1/T)$ bound, compared to $O(1/\sqrt{T})$ in the standard OCO setting. Consequently, these OCO tools are instrumental in exploiting structural properties of the functions involved and lead to improved convergence rates for RO and JEO. In certain cases, our results for RO and JEO match the best known or optimal rates in the corresponding problem classes without data uncertainty.
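For concreteness, weighted regret can be read against the standard notion as follows; this is an illustrative sketch only, since the paper's precise definitions are not reproduced in this abstract, and the weights $\theta_t$ and the normalization below are our assumptions:

$$\mathrm{Regret}_T = \frac{1}{T}\left(\sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x)\right), \qquad \mathrm{Regret}_T^{\theta} = \frac{1}{\Theta_T}\left(\sum_{t=1}^{T} \theta_t\, f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} \theta_t\, f_t(x)\right),$$

where $f_t$ is the loss revealed at round $t$, $x_t$ is the decision played, $\theta_t \ge 0$, and $\Theta_T = \sum_{t=1}^{T} \theta_t$. Taking $\theta_t \equiv 1$ recovers the unweighted (average) regret; for strongly convex losses, an increasing weight sequence such as $\theta_t \propto t$ is the kind of choice consistent with the $O(1/T)$ rate quoted above, since later, better-informed decisions receive more weight.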
