Abstract
This paper is concerned with convergence rates of stochastic optimization algorithms as a function of the computational budget. The underlying problems arise naturally in a wide range of applications in Monte Carlo optimization and discrete event systems, for example, optimization of steady-state simulation models with likelihood-ratio, perturbation-analysis, or finite-difference gradient estimators, and optimization of infinite-horizon models with discounting. Frequently, one wants to minimize a cost functional α(·) over ℝ^r. We are mainly interested in the situation where the value of α(θ) for a θ ∈ ℝ^r (or its gradient) is difficult to compute, and only a gradient estimator, computable by simulation, is available. The quality of the estimator may depend on the parameter value θ and on the computing budget. Assuming that a gradient estimator is available and that both its bias and its variance are functions of the budget, we use the estimator in conjunction with a stochastic approximation (SA) algorithm. Our interest is in how to allocate the total available computational budget to the successive SA iterations. We derive convergence rates in terms of both the number of iterations and the total computational effort. Our results are also applicable to root-finding stochastic approximations.
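For illustration, the following is a minimal Python sketch of the setting described above: a Robbins-Monro-type SA recursion driven by a simulated gradient estimator whose bias and variance shrink as the per-iteration budget grows. The toy objective, the assumed bias and variance rates, and the even split of the total budget across iterations are hypothetical choices for illustration only, not the allocation rule analyzed in the paper.

    import numpy as np

    def noisy_gradient(theta, budget, rng):
        # Hypothetical simulation-based gradient estimator standing in for a
        # likelihood-ratio or finite-difference estimator.  Its bias is assumed
        # to shrink like O(1/budget) and its standard deviation like
        # O(1/sqrt(budget)); the true rates depend on the estimator used.
        true_grad = 2.0 * theta                      # toy objective alpha(theta) = ||theta||^2
        bias = 1.0 / budget                          # budget-dependent bias (assumed rate)
        noise = rng.normal(scale=1.0 / np.sqrt(budget), size=theta.shape)
        return true_grad + bias + noise

    def sa_with_budget(total_budget, n_iter, theta0, rng):
        # SA recursion theta_{k+1} = theta_k - gamma_k * g_k(theta_k, c_k),
        # here splitting a fixed total budget evenly across iterations.
        # Other allocation rules (e.g. growing per-iteration budgets) are
        # precisely the design question the paper studies.
        theta = np.array(theta0, dtype=float)
        per_iter = total_budget / n_iter             # one possible allocation rule
        for k in range(1, n_iter + 1):
            gamma_k = 1.0 / k                        # standard SA step sizes
            theta -= gamma_k * noisy_gradient(theta, per_iter, rng)
        return theta

    rng = np.random.default_rng(0)
    print(sa_with_budget(total_budget=1e5, n_iter=200, theta0=[5.0, -3.0], rng=rng))

Running the sketch drives the iterate toward the minimizer of the toy objective; changing how total_budget is allocated across iterations changes the accuracy achievable for a fixed total effort, which is the trade-off the paper's convergence rates quantify.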