Abstract

In this paper, we consider constrained optimization problems with a convex objective and smooth convex functional constraints. We propose a new stochastic gradient algorithm, called the Stochastic Moving Ball Approximation (SMBA) method, to solve this class of problems: at each iteration we first take a (sub)gradient step for the objective function and then perform a projection step onto a ball approximation of a single randomly chosen constraint. The computational simplicity of SMBA, which uses only first-order information and considers only one constraint at a time, makes it suitable for large-scale problems with many functional constraints. We provide a convergence analysis for the SMBA algorithm under basic assumptions on the problem, which yields new convergence rates in both optimality and feasibility criteria evaluated at some average point. Our convergence proofs are novel, since we must deal properly with infeasible iterates and with quadratic upper approximations of constraints that may yield empty balls. We derive convergence rates of order $\mathcal{O}(k^{-1/2})$ when the objective function is convex, and $\mathcal{O}(k^{-1})$ when it is strongly convex. Preliminary numerical experiments on quadratically constrained quadratic problems demonstrate the viability and performance of our method compared to some existing state-of-the-art optimization methods and software.
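To make the iteration described above concrete, the following is a minimal Python/NumPy sketch of one SMBA-style step. The ball comes from the standard quadratic upper bound on a smooth constraint $g_j$ with gradient Lipschitz constant $L_j$, i.e. $g_j(y) + \langle \nabla g_j(y), x-y\rangle + (L_j/2)\|x-y\|^2 \le 0$, completed to the square. The function names, the choice to linearize at the intermediate point `v`, and the fallback when the ball is empty are illustrative assumptions, not the paper's exact specification.

```python
# Hypothetical sketch of one SMBA-style iteration; names (smba_step, alpha,
# L_g, ...) are illustrative, not the paper's notation.
import numpy as np

def ball_approximation(g_val, g_grad, y, L):
    """Ball {x : g(y) + <g'(y), x - y> + (L/2)||x - y||^2 <= 0}.
    Completing the square gives center y - g'(y)/L and squared radius
    ||g'(y)||^2/L^2 - 2 g(y)/L. Returns None if the ball is empty."""
    center = y - g_grad / L
    r2 = np.dot(g_grad, g_grad) / L**2 - 2.0 * g_val / L
    if r2 < 0:  # empty ball: the quadratic model admits no feasible point
        return None
    return center, np.sqrt(r2)

def project_onto_ball(v, center, radius):
    """Euclidean projection of v onto the ball B(center, radius)."""
    d = v - center
    nd = np.linalg.norm(d)
    if nd <= radius:
        return v
    return center + radius * d / nd

def smba_step(x, subgrad_f, g_funcs, g_grads, L_g, alpha, rng):
    # 1) (sub)gradient step on the objective
    v = x - alpha * subgrad_f(x)
    # 2) pick one functional constraint uniformly at random
    j = rng.integers(len(g_funcs))
    # 3) project onto the ball approximation of that constraint
    ball = ball_approximation(g_funcs[j](v), g_grads[j](v), v, L_g[j])
    if ball is None:  # assumed fallback; the paper's analysis handles this case
        return v
    return project_onto_ball(v, *ball)
```

Each step touches only one constraint and uses closed-form projection onto a ball, which is what keeps the per-iteration cost low when the number of functional constraints is large.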
