Abstract

We consider an optimization problem with a strongly convex objective and linear inequality constraints. To handle a large number of constraints, we provide a penalty reformulation of the problem. As penalty functions we use a version of the one-sided Huber loss. The smoothness properties of these functions allow us to choose time-varying penalty parameters in such a way that the incremental procedure with a diminishing step-size converges to the exact solution at the rate O(1/√k). To the best of our knowledge, this is the first convergence-rate result for a penalty-based gradient method in which the penalty parameters vary with time.
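To make the scheme concrete, the following Python/NumPy sketch illustrates the kind of incremental penalty-based gradient method the abstract describes. The one-sided Huber penalty here is the standard smoothed hinge; the parameter schedules R_k ≈ √k and γ_k ≈ 1/k, the smoothing level delta, and the toy problem are illustrative assumptions, not the specific choices derived in the paper.

import numpy as np

def one_sided_huber_grad(t, delta):
    # Derivative of the one-sided Huber penalty:
    # 0 for t <= 0, t/delta on (0, delta], 1 beyond delta.
    return np.clip(t / delta, 0.0, 1.0)

def incremental_penalty_method(grad_f, A, b, x0, num_epochs=5000, delta=0.1):
    # Minimize f(x) subject to A x <= b via the penalized objective
    # f(x) + R_k * sum_i huber(a_i^T x - b_i), cycling through the
    # constraint penalties one at a time (incremental updates).
    # The schedules R_k ~ sqrt(k) and gamma_k ~ 1/k are assumptions
    # for illustration, not the schedules analyzed in the paper.
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, num_epochs + 1):
        R_k = np.sqrt(k)      # growing, time-varying penalty parameter
        gamma_k = 1.0 / k     # diminishing step size
        x -= gamma_k * grad_f(x)                 # step on the objective
        for a_i, b_i in zip(A, b):               # incremental penalty steps
            t = a_i @ x - b_i                    # violation of constraint i
            x -= gamma_k * R_k * one_sided_huber_grad(t, delta) * a_i
    return x

# Toy usage: min 0.5*||x - c||^2 subject to x_i <= 1 componentwise;
# the exact solution is [1.0, -0.5].
c = np.array([2.0, -0.5])
x_hat = incremental_penalty_method(lambda x: x - c, np.eye(2), np.ones(2),
                                   x0=np.zeros(2))
print(x_hat)  # approaches [1.0, -0.5] as k grows

Because the Huber gradient is bounded by 1, each penalty step has magnitude at most γ_k·R_k, which vanishes under the assumed schedules; growing R_k is what drives the residual constraint violation to zero, so the iterates approach the exact constrained solution rather than a biased penalized one.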
