Abstract

This paper studies primal convergence in dual first-order methods for convex optimization. Specifically, we consider Lagrange decomposition of a general class of inequality- and equality-constrained optimization problems with strongly convex, but not necessarily differentiable, objective functions. The corresponding dual problem is solved using a first-order method, and the minimizer of the Lagrangian computed when evaluating the dual function is considered as an approximate primal solution. We derive error bounds for this approximate primal solution in terms of the dual errors. Based on such error bounds, we show that the approximate primal solution converges to the primal optimum at a rate no worse than O(1/√k) if the projected dual gradient method is adopted and O(1/k) if a fast gradient method is utilized, where k is the number of iterations. Finally, via simulation, we compare the convergence behavior of different approximate primal solutions in various dual first-order methods in the literature.
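The primal-recovery scheme the abstract describes can be illustrated on a toy instance. The sketch below is not the paper's algorithm or problem data; it is a minimal, hypothetical example of projected dual gradient ascent for a strongly convex objective (minimize ½‖x‖² subject to Ax ≥ b), where the Lagrangian minimizer x(λ) evaluated at each dual iterate serves as the approximate primal solution. The instance A = [1, 1], b = 1 and the step size are chosen for illustration only.

```python
import numpy as np

# Toy instance (illustrative, not from the paper):
# minimize (1/2)||x||^2  subject to  A x >= b,
# written as the inequality  b - A x <= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# For this objective the Lagrangian
#   L(x, lam) = (1/2)||x||^2 + lam^T (b - A x)
# has the closed-form minimizer x(lam) = A^T lam,
# so evaluating the dual function also yields a primal candidate.
def primal_from_dual(lam):
    return A.T @ lam

# Projected dual gradient ascent: the dual gradient is the
# constraint residual b - A x(lam); projection keeps lam >= 0.
# The dual gradient is Lipschitz with constant ||A||^2 / mu = 2
# (strong convexity mu = 1), so any step alpha <= 1/2 works.
alpha = 0.4
lam = np.zeros(1)
for _ in range(200):
    x = primal_from_dual(lam)
    lam = np.maximum(0.0, lam + alpha * (b - A @ x))

x = primal_from_dual(lam)
print(x)  # approximate primal solution; true optimum is (0.5, 0.5)
```

Here the approximate primal iterate x(λₖ) converges to the optimum (0.5, 0.5) along with the dual iterate, matching the qualitative behavior the abstract analyzes; the paper's bounds quantify this convergence rate for general first-order dual updates.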
