Finite-Time Performance of Gradient-Based Stochastic Approximation Algorithms

Abstract

Stochastic approximation (SA) algorithms are among the most commonly used methods for solving stochastic optimization problems. In “Technical Note: On the Convergence Rate of Stochastic Approximation for Gradient-Based Stochastic Optimization,” J. Hu and M. C. Fu provide a new error bound for SA algorithms applied to a class of problems with convex differentiable structure. The bound allows a detailed characterization of an algorithm’s finite-time performance by directly analyzing the bias and variance of the gradient estimator employed. The main result is then applied to study and compare the efficiency of traditional Kiefer-Wolfowitz algorithms and of algorithms based on random-direction finite-difference gradient estimators, such as simultaneous perturbation SA. The analysis yields new insights into when such randomized gradient estimates may be advantageous under various problem and algorithm parameter settings.
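To make the comparison concrete, the sketch below contrasts the two gradient estimators the note studies inside a plain SA recursion: a Kiefer-Wolfowitz central finite-difference estimator, which needs 2d function evaluations per step in dimension d, and a simultaneous-perturbation (SPSA-style) estimator, which needs only 2 regardless of d. This is a minimal illustration, not the paper's setup: the quadratic test objective f_noisy, the noise level, and the gain sequences a_k = 1/k and c_k = k^{-1/4} are all assumed here for demonstration.

```python
# Illustrative sketch only; objective, noise, and gain sequences are
# assumptions, not the settings analyzed by Hu and Fu.
import numpy as np

def f_noisy(x, rng):
    """Noisy evaluation of a smooth convex test objective f(x) = ||x||^2 / 2."""
    return 0.5 * np.dot(x, x) + rng.normal(scale=0.1)

def kw_gradient(x, c, rng):
    """Kiefer-Wolfowitz central differences: 2*d evaluations per iteration."""
    d = x.size
    g = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        g[i] = (f_noisy(x + c * e, rng) - f_noisy(x - c * e, rng)) / (2 * c)
    return g

def spsa_gradient(x, c, rng):
    """SPSA-style estimator: 2 evaluations along a random Rademacher
    (+/-1) perturbation direction, independent of the dimension d."""
    delta = rng.choice([-1.0, 1.0], size=x.size)
    diff = f_noisy(x + c * delta, rng) - f_noisy(x - c * delta, rng)
    return diff / (2 * c) * (1.0 / delta)  # elementwise division by delta

def sa_run(grad_est, x0, n_iter=500, seed=0):
    """SA recursion x_{k+1} = x_k - a_k * g_k(x_k) with decaying gains."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k        # step-size sequence (illustrative choice)
        c_k = k ** -0.25     # perturbation-size sequence (illustrative choice)
        x = x - a_k * grad_est(x, c_k, rng)
    return x

print("KW:  ", sa_run(kw_gradient, x0=[2.0, -1.5, 0.5]))
print("SPSA:", sa_run(spsa_gradient, x0=[2.0, -1.5, 0.5]))
```

Both runs drive the iterate toward the minimizer at the origin; the trade-off the note quantifies is that the SPSA-style estimator buys its per-iteration cheapness with extra variance from the random direction, so which estimator is more efficient depends on the problem dimension and the gain-sequence parameters.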
