Abstract

The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
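The Monte Carlo approximation mentioned above can be made concrete with a minimal sketch (not taken from the paper): for two equal-size samples on the real line, the 1-Wasserstein distance between their empirical measures reduces to the mean absolute difference of their order statistics, so one can watch the distance between two independent samples from the same distribution shrink as the sample size grows.

```python
import random

def wasserstein_1d(xs, ys):
    """1-Wasserstein distance between the empirical measures of two
    equal-size 1D samples: mean absolute gap of the sorted samples."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

random.seed(0)
for n in (10, 100, 1000):
    # two independent i.i.d. samples from the same standard normal law
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    ref = [random.gauss(0.0, 1.0) for _ in range(n)]
    print(n, wasserstein_1d(sample, ref))
```

The distribution, sample sizes, and helper name are illustrative choices; the identity used holds for empirical measures of equal-size samples on the real line.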

Highlights

  • In stochastic optimization, stability usually refers to the continuity properties of optimal values and solution sets as mappings from a set of probability measures, endowed with a suitable distance, into the extended reals and solution space, respectively, see [22]

  • We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on partial differential equation (PDE)-constrained optimization as well as functional data analysis

  • For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions


Introduction

Stability usually refers to the continuity properties of optimal values and solution sets as mappings from a set of probability measures, endowed with a suitable distance, into the extended reals and the solution space, respectively; see [22]. The smallest relevant family F of Borel measurable functions in our stability studies contains only those functions which appear in the stochastic optimization problem under consideration. In this case, dF may be called the minimal information (m.i.) distance. Stability results with respect to such m.i. distances serve as the starting point (i) to study stability with respect to the weak convergence of probability measures and (ii) to enlarge the family F properly by functions sharing essential analytical properties with the original ones. The latter strategy may lead to probability metrics that enjoy desirable properties such as dual representations and convergence characterizations.
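A distance dF indexed by a function class F is commonly written in the following standard form (the notation here is the usual one from the stability literature, cf. [22], not reproduced from the paper):

```latex
d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}}
\left| \int f \, \mathrm{d}P - \int f \, \mathrm{d}Q \right|.
```

Enlarging F as described above recovers familiar metrics: taking F to be the functions with Lipschitz constant at most 1 yields the Kantorovich–Rubinstein (1-Wasserstein) distance, while polynomially weighted Lipschitz classes yield Fortet-Mourier metrics.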

Notation and preliminary results
The optimization problem
Qualitative stability
Quantitative stability
An application to PDE-constrained optimization under uncertainty
Monte Carlo approximation
Numerical illustration
Conclusion
Appendix A: Results from fixed point theory