Abstract
Convergence-rate results are derived for a stochastic optimization problem in which a performance measure is minimized with respect to a vector parameter t. Assuming that a gradient estimator is available, and that both the bias and the variance of this estimator are known functions of the computational budget devoted to its computation, the gradient estimator is employed in conjunction with a stochastic approximation (SA) algorithm. Our goal is to determine how to allocate the total available computational budget across the successive SA iterations. We address the asymptotic version of this problem by deriving the convergence rate of SA toward the optimizer, first as a function of the number of iterations and then as a function of the total computational effort; from this, the optimal rate of increase of the computational budget per iteration can be found. Explicit expressions are derived for the case where the computational budget devoted to an iteration is polynomial in the iteration number, and where the bias and variance of the gradient estimator are polynomial in that budget. Applications include the optimization of steady-state simulation models with likelihood-ratio, perturbation-analysis, or finite-difference gradient estimators; optimization of infinite-horizon models with discounting; and optimization of functions of several expectations. Several examples are discussed. Our results readily generalize to general root-finding problems.
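To make the setting concrete, the following is a minimal sketch, not the paper's algorithm: a Robbins-Monro SA recursion on a toy quadratic objective, driven by a stylized gradient estimator whose bias decays like budget^(-beta) and whose noise standard deviation decays like budget^(-1/2), with the per-iteration budget growing polynomially as c_n = n^gamma. The function names (noisy_gradient, sa_with_budget) and the parameter values (beta, gamma, b0, sigma) are all illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(theta, budget, beta=1.0, b0=0.5, sigma=2.0):
    """Stylized budget-dependent gradient estimator for f(theta) = ||theta||^2 / 2.

    Hypothetical model: bias decays like budget**(-beta) and the noise
    standard deviation like budget**(-0.5), mimicking an estimator whose
    bias and variance are polynomial in the computational budget.
    """
    true_grad = theta                          # gradient of ||theta||^2 / 2
    bias = b0 * budget ** (-beta)              # assumed polynomial bias decay
    noise = sigma * budget ** (-0.5) * rng.standard_normal(np.shape(theta))
    return true_grad + bias + noise

def sa_with_budget(theta0, n_iters=200, a=1.0, gamma=1.0):
    """Robbins-Monro SA with a polynomially growing per-iteration budget c_n = n**gamma."""
    theta = np.asarray(theta0, dtype=float)
    total_budget = 0.0
    for n in range(1, n_iters + 1):
        c_n = n ** gamma                       # computational budget for iteration n
        total_budget += c_n                    # track the cumulative effort spent
        step = a / n                           # classical SA step size a_n = a / n
        theta = theta - step * noisy_gradient(theta, c_n)
    return theta, total_budget

theta_hat, spent = sa_with_budget(theta0=[5.0, -3.0])
print(f"estimate: {theta_hat}, total budget spent: {spent:.0f}")
```

In this toy setting, varying gamma trades off more iterations against a better estimator per iteration for a fixed total budget, which is exactly the allocation question the abstract poses; the paper's results characterize the optimal trade-off asymptotically.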