Abstract

The motivations for constructing algorithms with the properties specified in the title of this paper come from two sources. The first is that the ellipsoid method (see e.g. Shor (1982) and Sonnevend (1983)) has slow (asymptotic) convergence for functions of the above two classes. The second is that the popular practice of globalizing the convergence of the asymptotically fast quasi-Newton methods by means of line search strategies (described in Stoer (1980); bundle methods are described in Lemarechal et al. (1981)) becomes rather questionable if function and subgradient evaluations are costly and if the function is “stiff”, i.e. has badly conditioned or strongly varying second derivatives (Hessian matrices). Indeed, intuitively speaking, a line search uses the local information about the function only for a local prediction, while the ellipsoid method uses the same information to obtain a global prediction (based on a more decisive use of convexity). In the bundle (ε-subgradient) methods the generation of a “usable” descent direction (not to mention the corresponding line search) may require, for a nonsmooth f in the initial steps, a large number of function (subgradient) evaluations. The important feature of the ellipsoid method, which is used here to obtain a method with finite termination (i.e. exact computation of f*) for piecewise linear functions (which is very important for the solution of general linear programming problems), is that it provides (asymptotically exact) lower bounds for the value of f*.
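
The lower bounds mentioned in the last sentence are a standard by-product of the ellipsoid method: if a minimizer lies in the current ellipsoid E_k = {x : (x - c_k)^T P_k^{-1} (x - c_k) <= 1} and g_k is a subgradient of f at the center c_k, then f* >= f(c_k) - sqrt(g_k^T P_k g_k). The Python sketch below only illustrates this bound together with the usual central-cut update; it is not the algorithm developed in the paper, and the names (ellipsoid_minimize, f, subgrad, c0, R) are chosen for the example.

import numpy as np

def ellipsoid_minimize(f, subgrad, c0, R, max_iter=500, tol=1e-8):
    """Central-cut ellipsoid method (illustrative sketch, n >= 2).
    Assumes a minimizer of the convex function f lies in the ball of
    radius R around c0; returns the best point, the best value found
    (upper bound) and the best lower bound on f*."""
    n = len(c0)
    c = np.asarray(c0, dtype=float)
    P = (R ** 2) * np.eye(n)           # initial ellipsoid: the ball B(c0, R)
    upper, lower = np.inf, -np.inf     # upper and lower bounds on f*
    best = c.copy()
    for _ in range(max_iter):
        g = np.asarray(subgrad(c), dtype=float)
        fc = f(c)
        gPg = float(g @ P @ g)
        if fc < upper:
            upper, best = fc, c.copy()
        if gPg <= 0.0:                 # zero subgradient: the center is optimal
            lower = upper
            break
        lower = max(lower, fc - np.sqrt(gPg))   # lower bound from this cut
        if upper - lower <= tol:       # the gap certifies near-optimality
            break
        # standard central-cut update of the center and the shape matrix
        b = P @ (g / np.sqrt(gPg))
        c = c - b / (n + 1)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(b, b))
    return best, upper, lower

For a piecewise linear f(x) = max_i (a_i^T x + b_i), a subgradient at x is simply the a_i of a maximizing piece, so each iteration costs one function/subgradient evaluation, and the shrinking gap upper - lower shows how the (asymptotically exact) lower bounds close in on f*.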
