Abstract

When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients that verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and show that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. First, it enables the finite identification of active constraints at the solution obtained in the limit. Second, it is used to establish the ergodic convergence of sequences of Lagrange multipliers; this result enables us to carry out sensitivity analyses for solutions obtained by subgradient methods. The third implication is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure provides a proper termination criterion for subgradient methods.
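To illustrate the idea of averaging the subgradients generated by a subgradient scheme, the following is a minimal sketch, not the paper's method: it uses an ordinary projected subgradient step (rather than the conditional subgradient scheme treated in the paper) with divergent series step lengths, and forms a step-length-weighted ergodic average of the subgradients used. All names (f_subgrad, project, the step-length parameters a and b) are hypothetical placeholders.

```python
import numpy as np

def subgradient_with_ergodic_average(f_subgrad, project, x0,
                                      num_iters=5000, a=1.0, b=10.0):
    """Projected subgradient method with step lengths alpha_t = a / (b + t)
    (a divergent series), returning the final iterate together with a
    step-length-weighted ergodic average of the subgradients used."""
    x = np.asarray(x0, dtype=float)
    g_ergodic = np.zeros_like(x)   # running weighted average of subgradients
    weight_sum = 0.0

    for t in range(num_iters):
        g = f_subgrad(x)               # any subgradient of f at x
        alpha = a / (b + t)            # divergent series step length
        x = project(x - alpha * g)     # projected subgradient step

        # Ergodic average with weights proportional to the step lengths:
        # g_ergodic = sum_t alpha_t * g_t / sum_t alpha_t, updated incrementally.
        weight_sum += alpha
        g_ergodic += (alpha / weight_sum) * (g - g_ergodic)

    return x, g_ergodic

# Hypothetical usage: minimize f(x) = |x[0] + 1| + |x[1] - 2| over the
# nonnegative orthant; the averaged subgradient approximates one that
# satisfies the optimality conditions at the limit point.
c = np.array([-1.0, 2.0])
f_subgrad = lambda x: np.sign(x - c)
project = lambda x: np.maximum(x, 0.0)
x_star, g_bar = subgradient_with_ergodic_average(f_subgrad, project, x0=np.zeros(2))
```

In contrast to the individual subgradients, which need not converge, the weighted average g_bar stabilizes and can be inspected as an approximate optimality certificate; the paper establishes this convergence rigorously for the conditional subgradient scheme.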
