Abstract

In this paper, the deterministic global optimization algorithm αBB (α-based Branch and Bound) is presented. This algorithm offers mathematical guarantees of convergence to a point arbitrarily close to the global minimum for the large class of twice-differentiable NLPs. The key idea is the construction of a converging sequence of upper and lower bounds on the global minimum through the convex relaxation of the original problem. This relaxation is obtained by (i) replacing all nonconvex terms of special structure (i.e., bilinear, trilinear, fractional, fractional trilinear, univariate concave) with customized tight convex lower bounding functions and (ii) utilizing the α parameters, as defined by Maranas and Floudas (1994b), to generate valid convex underestimators for nonconvex terms of generic structure. In most cases, the calculation of appropriate values for the α parameters is a challenging task. A number of approaches are proposed which rigorously generate a set of α parameters for general twice-differentiable functions. A crucial phase in the design of such procedures is the use of interval arithmetic on the Hessian matrix or the characteristic polynomial of the function being investigated. Owing to this step, the proposed schemes are computationally tractable and preserve the global optimality guarantees of the algorithm. However, they differ in accuracy and computational requirements, so that no single method consistently outperforms the others on all problems. Their use is illustrated on an unconstrained and a constrained example. The second part of this paper (Adjiman et al., 1998) is devoted to the discussion of issues related to the implementation of the αBB algorithm and to extensive computational studies illustrating its potential applications.
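For readers unfamiliar with the αBB construction, a standard rendering of the general underestimator sketched above is given below; the notation is not taken verbatim from the paper and is provided only as a hedged illustration.

\[
\mathcal{L}(x) \;=\; f(x) \;+\; \sum_{i=1}^{n} \alpha_i \,\bigl(x_i^{L} - x_i\bigr)\bigl(x_i^{U} - x_i\bigr), \qquad \alpha_i \ge 0,
\]

where \([x^{L}, x^{U}]\) is the current box. Each added term is nonpositive on the box, so \(\mathcal{L}(x) \le f(x)\) there, and \(\mathcal{L}\) is convex provided \(\nabla^{2} f(x) + 2\,\mathrm{diag}(\alpha_1,\dots,\alpha_n)\) is positive semidefinite over the box; in the uniform case \(\alpha_i = \alpha\), this holds for \(\alpha \ge \max\{0,\, -\tfrac{1}{2}\min_{x^{L} \le x \le x^{U}} \lambda_{\min}(\nabla^{2} f(x))\}\). Interval arithmetic on the Hessian supplies rigorous bounds on such eigenvalue quantities, which is the role of the interval-based schemes mentioned above.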
