Abstract
We study the rates at which optimal estimators in the sample average approximation approach converge to their deterministic counterparts in the almost sure sense and in mean. To quantify these rates, we consider the law of the iterated logarithm in a Banach space setting and first establish, under relatively mild assumptions, almost sure convergence rates for the approximating objective functions, which can then be transferred to the estimators for optimal values and solutions of the approximated problem. By exploiting a characterisation of the law of the iterated logarithm in Banach spaces, we further derive, under the same assumptions, that the estimators also converge in mean, at a rate which essentially coincides with the one in the almost sure sense. This, in turn, allows us to quantify the asymptotic bias of optimal estimators as well as to draw conclusive insights on their mean squared error and on the estimators for the optimality gap. Finally, we address the notion of convergence in probability to derive rates in probability for the deviation of optimal estimators and (weak) rates of error probabilities without imposing strong conditions on exponential moments. We discuss the possibility of constructing confidence sets for the optimal values and solutions from our obtained results and provide a numerical illustration of the most relevant findings.
Introduction
Let (Ω, F, P) be a complete probability space on which we consider the stochastic programming problem

min_{x ∈ X} f(x) := E_P[h(x, ξ)],    (1)

where X ⊂ R^n denotes a nonempty compact set with the usual (Euclidean) metric, ξ a random vector whose distribution P_ξ is supported on a set Ξ ⊂ R^m, and h : X × Ξ → R a function depending on some parameter x ∈ X and the random vector ξ.
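The sample average approximation replaces the expectation in (1) by an empirical mean over an i.i.d. sample of ξ and minimises the resulting function over X. The following sketch illustrates this on a hypothetical instance (not from the paper): h(x, ξ) = (x − ξ)², ξ ~ N(0, 1) and X = [−1, 1], so that f(x) = x² + 1 with true optimal value 1 attained at x* = 0; the feasible set is discretised by a grid for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_estimate(N, grid_size=401):
    """SAA solution and optimal value for the toy instance above."""
    xi = rng.standard_normal(N)            # i.i.d. sample xi_1, ..., xi_N
    X = np.linspace(-1.0, 1.0, grid_size)  # discretised feasible set X
    # sample average approximation f_N(x) = (1/N) sum_i h(x, xi_i)
    f_N = ((X[:, None] - xi[None, :]) ** 2).mean(axis=1)
    j = f_N.argmin()
    return X[j], f_N[j]                    # estimators x_N and v_N

for N in (10, 100, 10_000):
    x_N, v_N = saa_estimate(N)
    print(f"N={N:>6}: x_N={x_N:+.4f}, v_N={v_N:.4f}")
```

As N grows, the printed estimators (x_N, v_N) approach the true pair (0, 1); the paper's results quantify how fast this happens almost surely and in mean.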
By use of the compact law of the iterated logarithm (LIL) in the Banach spaces C(X ) and C1(X ), we provide in Sect. 3.2 our main findings on almost sure rates of convergence for estimators of optimal values and solutions
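The scalar version of the LIL already suggests the order of these rates: for the sample mean of h(x, ξ) at a fixed x, the almost sure deviations are of order σ·sqrt(2 log log N / N). A minimal numerical check, again on the hypothetical instance h(x, ξ) = (x − ξ)² with ξ ~ N(0, 1) evaluated at x = 0 (so h(0, ξ) = ξ² has mean 1 and variance 2):

```python
import numpy as np

rng = np.random.default_rng(1)

xi = rng.standard_normal(1_000_000)
h = xi ** 2                  # h(0, xi) = xi^2, mean 1, variance 2
sigma = np.sqrt(2.0)

for N in (1_000, 10_000, 100_000, 1_000_000):
    dev = abs(h[:N].mean() - 1.0)
    # LIL normalisation: deviations stay of order
    # sigma * sqrt(2 log log N / N) almost surely
    scale = sigma * np.sqrt(2 * np.log(np.log(N)) / N)
    print(f"N={N:>8}: |f_N(0) - f(0)| = {dev:.5f}, LIL scale = {scale:.5f}")
```

The ratio of the observed deviation to the LIL scale remains bounded along N, which is the pointwise analogue of the functional rates established in Sect. 3.2.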
Summary
The rates of error probabilities, i.e. the deviation probabilities between the optimal estimators and their corresponding unknown true values, have been quantified due to their practical relevance. This has been addressed, for instance, by Vogel [47,48], who uses a large deviation approach to estimate the probability that the solution set of an approximating problem is not contained in an ε-neighbourhood of the original solution set in a standard stochastic programme, and to estimate the probability of particular events of both solution sets in a multiobjective programming framework, respectively.