Abstract

An extension of the mirror descent method, developed for convex stochastic optimization problems, to constrained convex stochastic optimization problems (subject to functional inequality constraints) is studied. The proposed method performs an ordinary mirror descent step when the constraints are violated only insignificantly, and a mirror descent step with respect to a violated constraint when that constraint is violated significantly. If the method's parameters are chosen appropriately, a convergence-rate bound that is optimal for this class of problems is obtained, and sharp bounds on the probability of large deviations are proved. For the deterministic case, the primal–dual property of the proposed method is proved: from the sequence of points (vectors) generated by the method, a solution of the dual problem can be reconstructed to the same accuracy with which the primal problem is solved. The efficiency of the method as applied to problems subject to a huge number of constraints is discussed. Note that the bound on the duality gap obtained in this paper does not involve the unknown size of the solution to the dual problem.
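As a rough illustration of the switching scheme described above, here is a minimal sketch in the Euclidean setting, where mirror descent reduces to projected subgradient descent. This is not the authors' exact algorithm: the step-size rule, the projection set (a ball), and the averaging over productive points are simplifying assumptions made for the example.

```python
import numpy as np

def switching_mirror_descent(grad_f, g, grad_g, x0, eps, radius, n_iter):
    """Switching subgradient sketch: productive steps on the objective f
    when the constraint g is nearly satisfied, non-productive steps on g
    when it is significantly violated (Euclidean prox, ball constraint)."""
    x = np.asarray(x0, dtype=float)
    productive = []
    for _ in range(n_iter):
        if g(x) <= eps:
            d = grad_f(x)      # "productive" step: descend on the objective
            productive.append(x.copy())
        else:
            d = grad_g(x)      # "non-productive" step: reduce the violated constraint
        x = x - (eps / max(d @ d, 1e-12)) * d   # step size ~ eps / ||d||^2 (assumed rule)
        nrm = np.linalg.norm(x)
        if nrm > radius:       # projection back onto the simple set Q (a ball here)
            x *= radius / nrm
    # output: average of the points generated by productive steps
    return np.mean(productive, axis=0)

# toy example: minimize x1 + x2 subject to x1^2 + x2^2 <= 1
x_avg = switching_mirror_descent(
    grad_f=lambda x: np.array([1.0, 1.0]),
    g=lambda x: x @ x - 1.0,
    grad_g=lambda x: 2.0 * x,
    x0=[0.0, 0.0], eps=0.02, radius=2.0, n_iter=5000)
# x_avg lies near the optimum (-1/sqrt(2), -1/sqrt(2))
```

The returned average approximately solves the toy problem while violating the constraint by at most O(eps), mirroring the accuracy/feasibility trade-off the abstract refers to.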
