Abstract

A global optimization algorithm, αBB, for twice-differentiable NLPs is presented. It operates within a branch-and-bound framework and requires the construction of a convex lower bounding problem. A technique for generating valid convex underestimators of arbitrary twice-differentiable functions is described. αBB has been applied to a variety of problems, and a summary of the results obtained is provided.
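
A minimal sketch of such an underestimator, following the standard αBB construction (the specific form is not spelled out in this abstract): for a twice-differentiable $f$ on a box $[x^L, x^U]$,

\[
\mathcal{L}(x) = f(x) + \alpha \sum_{i=1}^{n} (x_i^L - x_i)(x_i^U - x_i),
\qquad
\alpha \ge \max\Big\{0,\; -\tfrac{1}{2} \min_{x \in [x^L,\, x^U]} \lambda_{\min}\big(\nabla^2 f(x)\big)\Big\}.
\]

Each quadratic term is nonpositive on the box, so $\mathcal{L} \le f$ there and $\mathcal{L}$ coincides with $f$ at the box vertices; moreover $\nabla^2 \mathcal{L} = \nabla^2 f + 2\alpha I \succeq 0$ for such $\alpha$, so $\mathcal{L}$ is convex and minimizing it yields a valid lower bound for the branch-and-bound tree.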
