Abstract

This chapter presents an overview of approaches for solving multi-parametric programming problems. It is organized as follows. In Section 1.1, a general multi-parametric nonlinear programming (mp-NLP) problem is formulated and the Karush-Kuhn-Tucker (KKT) optimality conditions are presented. The three main groups of methods for finding a local minimum of an NLP problem for a given parameter vector are then reviewed: Newton-type methods, penalty function methods, and direct search methods. The Basic Sensitivity Theorem, which gives local regularity conditions for the optimal solution as a function of the parameters, is also reviewed. Algorithms are then described for finding an approximate explicit solution of mp-NLP problems, based on an orthogonal (k–d tree) partition of the parameter space; both convex and non-convex mp-NLP problems are considered. Procedures and heuristic rules are formulated for efficiently splitting a region in the parameter space and for handling infeasible cases. In Section 1.2, a multi-parametric quadratic programming (mp-QP) problem is formulated and two approaches for finding its exact explicit solution are described.
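The orthogonal (k–d tree) partitioning idea behind the approximate mp-NLP algorithms can be sketched as follows. This is a minimal illustration, not the chapter's algorithm: `solve` is a hypothetical stand-in for an NLP solver that returns the exact optimizer for a given parameter vector, each region stores an affine approximation fitted at the region's corners, and, as a simplification, the approximation error is tested only at the region's centre before deciding whether to split.

```python
import itertools
import numpy as np

def approx_explicit_solution(solve, box, tol, regions=None):
    """Recursively build an orthogonal (k-d tree) partition of the
    parameter box, storing an affine model z(x) ~ [x, 1] @ coef of the
    optimal solution on each accepted region.

    solve(x) -- hypothetical exact solver: optimizer for parameter x.
    box      -- (lo, hi) arrays bounding a hyper-rectangular region.
    tol      -- accepted approximation error at the region centre.
    """
    if regions is None:
        regions = []
    lo, hi = box
    # Sample the exact solution at the 2^d corners of the region.
    corners = np.array(list(itertools.product(*zip(lo, hi))))
    Z = np.array([solve(x) for x in corners])
    # Fit an affine model z(x) = K x + c by least squares on the corners.
    A = np.hstack([corners, np.ones((len(corners), 1))])
    coef, *_ = np.linalg.lstsq(A, Z, rcond=None)
    # Check the approximation error at the region centre (simplified test).
    centre = (lo + hi) / 2.0
    err = np.abs(np.append(centre, 1.0) @ coef - solve(centre)).max()
    if err <= tol:
        regions.append(((lo, hi), coef))  # accept this leaf region
    else:
        # Split the region in half along its longest axis (k-d tree rule).
        axis = int(np.argmax(hi - lo))
        hi1, lo2 = hi.copy(), lo.copy()
        hi1[axis] = centre[axis]
        lo2[axis] = centre[axis]
        approx_explicit_solution(solve, (lo, hi1), tol, regions)
        approx_explicit_solution(solve, (lo2, hi), tol, regions)
    return regions
```

For example, with the one-dimensional "solution" `solve = lambda x: x**2` on the box ([0], [1]) and `tol = 0.05`, the recursion refines the partition until each leaf's affine model meets the tolerance at its centre. Practical variants add feasibility handling and stricter error bounds over the whole region, as discussed in the chapter.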
