Abstract

We present a method for solving constrained convex stochastic optimization problems in which the objective is a finite sum of convex functions. Our method is based on incremental stochastic subgradient algorithms and string-averaging techniques, under the assumption that the subgradient directions are affected by random errors in each iteration. Our analysis allows the method to perform approximate projections onto the feasible set in each iteration. We provide convergence results for the case where a diminishing step-size rule is used. We test our method on a large set of random instances of a stochastic convex programming problem and compare its performance with the robust mirror descent stochastic approximation algorithm proposed in Nemirovski et al. (Robust stochastic approximation approach to stochastic programming, SIAM J Optim 19 (2009), pp. 1574–1609).
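To make the abstract's description concrete, below is a minimal sketch of one possible string-averaging incremental subgradient iteration. It is not the paper's algorithm; the box feasible set, the equal-weight averaging of string endpoints, the step-size rule alpha_k = c/(k+1), and the additive Gaussian error on each subgradient are all illustrative assumptions.

```python
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    # Hypothetical feasible set: the box [lo, hi]^n. The paper allows
    # approximate projections onto a general convex feasible set.
    return np.clip(x, lo, hi)

def sa_incremental_subgradient(subgrads, x0, strings,
                               n_iters=1000, c=1.0, noise_std=0.1, rng=None):
    """Illustrative string-averaging incremental subgradient method.

    subgrads  : list of callables; subgrads[i](x) returns a subgradient of f_i at x
    strings   : orderings of component indices, e.g. [[0, 1], [2, 3]]
    c         : constant in the diminishing step-size rule alpha_k = c / (k + 1)
    noise_std : std of the random error added to each subgradient direction
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        alpha = c / (k + 1)  # diminishing step size
        endpoints = []
        for string in strings:
            y = x.copy()
            for i in string:  # incremental pass along one string
                g = subgrads[i](y) + noise_std * rng.standard_normal(y.shape)
                y = project(y - alpha * g)  # (approximate) projection each step
            endpoints.append(y)
        # String-averaging step: combine the endpoints of all strings.
        x = project(np.mean(endpoints, axis=0))
    return x

# Example use: minimize f(x) = sum_i |x - a_i| over the box, with each
# component handled by the sign subgradient of |x - a_i|.
points = [np.array([0.3]), np.array([-0.2]), np.array([0.5]), np.array([0.1])]
subgrads = [lambda x, a=a: np.sign(x - a) for a in points]
x_star = sa_incremental_subgradient(subgrads, x0=np.zeros(1), strings=[[0, 1], [2, 3]])
```

Splitting the components into strings is what lets the per-string passes run independently (and in parallel) before the averaging step recombines them.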
