Abstract
Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity is a major reason for their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions, since the subgradients used in the method will, in general, not accumulate to subgradients that verify the optimality of a solution obtained in the limit. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers, is not directly available in subgradient schemes.
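To make the monitoring difficulty concrete, the following minimal Python sketch runs a plain subgradient method on the one-dimensional example f(x) = |x|, whose minimizer is x* = 0. The example, the divergent-series step sizes t_k = 1/(k+1), and all names in the code are illustrative assumptions, not taken from the paper.

```python
# A minimal subgradient-method sketch, assuming f(x) = |x| with minimizer 0.
# It illustrates the point from the abstract: the iterates converge, yet the
# subgradients used along the way do not accumulate to the subgradient 0
# that would verify optimality.

def f(x):
    return abs(x)

def subgradient(x):
    # For f(x) = |x|, the subdifferential is {sign(x)} away from 0,
    # and the whole interval [-1, 1] at x = 0.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 5.0
for k in range(10000):
    g = subgradient(x)
    x -= g / (k + 1)  # divergent-series step sizes: t_k -> 0, sum t_k = infinity

print(f"final iterate x = {x:.4f}, |subgradient used| = {abs(subgradient(x)):.1f}")
```

Under these assumptions the iterates approach 0, but every subgradient evaluated at a nonzero iterate has magnitude 1; the quantity |g| therefore never signals approximate optimality, which is why progress cannot be monitored through the subgradients alone.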