Abstract

In this paper, we consider constrained optimization problems with a convex objective and smooth convex functional constraints. We propose a new stochastic gradient algorithm, called the Stochastic Moving Ball Approximation (SMBA) method, to solve this class of problems: at each iteration we first take a (sub)gradient step for the objective function and then perform a projection step onto a ball approximation of one randomly chosen constraint. The computational simplicity of SMBA, which uses only first-order information and considers a single constraint at a time, makes it suitable for large-scale problems with many functional constraints. We provide a convergence analysis for the SMBA algorithm under basic assumptions on the problem, which yields new convergence rates in both optimality and feasibility criteria evaluated at an average point. Our convergence proofs are novel, since we must deal properly with infeasible iterates and with quadratic upper approximations of constraints that may yield empty balls. We derive convergence rates of order $\mathcal{O}(k^{-1/2})$ when the objective function is convex, and $\mathcal{O}(k^{-1})$ when it is strongly convex. Preliminary numerical experiments on quadratically constrained quadratic problems demonstrate the viability and performance of our method compared to some existing state-of-the-art optimization methods and software.
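To make the iteration concrete, below is a minimal Python sketch of the SMBA idea as described above: a (sub)gradient step on the objective, followed by projection onto the ball obtained from the quadratic upper model of one sampled smooth constraint, $g_j(y) \le g_j(x) + \langle \nabla g_j(x), y - x \rangle + \tfrac{L}{2}\|y - x\|^2$. Completing the square shows that the zero-sublevel set of this model is a Euclidean ball with center $x - \nabla g_j(x)/L$ and radius $\sqrt{\|\nabla g_j(x)\|^2/L^2 - 2 g_j(x)/L}$ whenever the radicand is nonnegative. All names (`smba`, `project_ball_model`), the $\mathcal{O}(k^{-1/2})$ step-size schedule, the point at which the ball model is built, and the fallback for empty balls are illustrative assumptions on our part, not the paper's exact specification.

```python
import numpy as np


def project_ball_model(z, g_val, g_grad, L):
    """Project z onto {y : g(z) + <g_grad, y - z> + (L/2)||y - z||^2 <= 0},
    the ball approximation of the constraint built at z (an assumed choice).
    Completing the square gives center c = z - g_grad/L and squared radius
    ||g_grad||^2/L^2 - 2*g_val/L; the ball is empty if that radicand is < 0."""
    rad2 = g_grad @ g_grad / L**2 - 2.0 * g_val / L
    c = z - g_grad / L
    if rad2 < 0.0:
        # Empty ball: step to the minimizer of the quadratic model. This is a
        # simplification; the paper handles this case rigorously.
        return c
    dist = np.linalg.norm(z - c)        # equals ||g_grad|| / L
    r = np.sqrt(rad2)
    if dist <= r:                       # z already satisfies the quadratic model
        return z
    return c + (r / dist) * (z - c)     # standard projection onto a ball


def smba(f_subgrad, constraints, x0, L, iters=5000, alpha0=1.0, seed=0):
    """Sketch of the SMBA loop: a subgradient step on the objective, then a
    projection onto the ball model of one randomly chosen constraint.
    Each constraints[j](x) must return the pair (g_j(x), grad g_j(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_avg = np.zeros_like(x)
    for k in range(1, iters + 1):
        alpha = alpha0 / np.sqrt(k)                 # assumed O(1/sqrt(k)) step size
        z = x - alpha * f_subgrad(x)                # (sub)gradient step
        j = rng.integers(len(constraints))          # sample one functional constraint
        g_val, g_grad = constraints[j](z)
        x = project_ball_model(z, g_val, g_grad, L)
        x_avg += x
    return x_avg / iters                            # rates are stated at an average point
```

A quick sanity check on a toy quadratically constrained problem (minimize $\|x - b\|^2$ over the unit ball, where $\nabla g(x) = 2x$ gives $L = 2$): the averaged iterate should approach the projection of $b$ onto the ball, roughly $(1, 0)$ here.

```python
b = np.array([2.0, 0.0])
f_sub = lambda x: 2.0 * (x - b)           # gradient of ||x - b||^2
g = lambda x: (x @ x - 1.0, 2.0 * x)      # g(x) = ||x||^2 - 1 and its gradient
print(smba(f_sub, [g], x0=np.zeros(2), L=2.0))
```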