Abstract
We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs—in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification.
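To make the ambiguity set concrete, the sketch below (our own illustration, not code from the paper) checks whether a candidate distribution lies in the Wasserstein ball of radius ε centered at the uniform distribution on the training samples. It assumes univariate samples so that SciPy's `wasserstein_distance` (which implements the one-dimensional 1-Wasserstein distance) applies; the sample values and the radius `eps` are made up for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# training samples from the unknown distribution; the center of the
# Wasserstein ball is the uniform (empirical) distribution on these points
train = np.array([0.1, 0.4, 0.7, 1.2])

# a candidate distribution, also represented by equally weighted samples
candidate = np.array([0.2, 0.5, 0.9, 1.0])

eps = 0.3  # radius of the Wasserstein ball (illustrative choice)

# 1-Wasserstein distance between the two empirical distributions
d = wasserstein_distance(train, candidate)

# the candidate belongs to the ambiguity set iff its distance to the
# empirical distribution is at most eps
in_ball = d <= eps
```

For equally weighted samples of the same size, the one-dimensional distance reduces to the mean absolute difference of the sorted values, which is what `wasserstein_distance` computes here.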
Highlights
Stochastic programming is a powerful modeling paradigm for optimization under uncertainty
Using the Wasserstein metric, we construct an ambiguity ball around the empirical distribution on the training samples and seek decisions that perform best against the worst-case distribution in that ball
Under mild assumptions, the resulting distributionally robust optimization problems can be reformulated as finite convex programs—in many interesting cases even as tractable linear programs
Summary
The main contribution of this paper is to demonstrate that the worst-case expectation over a Wasserstein ambiguity set can be computed efficiently via convex optimization techniques for numerous loss functions of practical interest. Using recent measure concentration results from statistics, we demonstrate that the optimal value of a distributionally robust optimization problem over a Wasserstein ambiguity set provides an upper confidence bound on the out-of-sample cost of the worst-case optimal decision. We validate this theoretical performance guarantee in numerical tests.
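As a minimal sketch of why the worst-case expectation is tractable: for a piecewise affine loss ℓ(ξ) = max_k (aₖᵀξ + bₖ) over a 1-Wasserstein ball with unbounded support and ℓ1 transport cost, the convex reformulation collapses to a closed form, namely the empirical mean of the loss plus ε times the Lipschitz constant max_k ‖aₖ‖∞ (the dual norm of ℓ1). The function below is our own illustration of this special case, not code from the paper; the sample data in the usage example are made up.

```python
import numpy as np

def worst_case_expectation(xi, A, b, eps):
    """Worst-case expectation of l(xi) = max_k (A[k] @ xi + b[k]) over a
    1-Wasserstein ball (l1 transport cost, unbounded support) of radius
    eps centered at the empirical distribution of the samples xi (N x m).
    """
    # (1/N) sum_i max_k (a_k' xi_i + b_k): the empirical mean of the loss
    empirical = np.max(xi @ A.T + b, axis=1).mean()
    # max_k ||a_k||_inf: Lipschitz constant of the loss w.r.t. the l1 norm
    lipschitz = np.max(np.abs(A).max(axis=1))
    return empirical + eps * lipschitz

# usage: the absolute-value loss l(xi) = |xi| via A = [[1], [-1]], b = [0, 0]
xi = np.array([[1.0], [-2.0], [3.0]])
val = worst_case_expectation(xi, np.array([[1.0], [-1.0]]),
                             np.array([0.0, 0.0]), eps=0.5)
```

The worst-case distribution simply shifts ε units of probability mass in the steepest direction of the loss, which is why the radius enters linearly through the Lipschitz constant.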