Abstract

Two methods are described for solving an optimization problem with a piecewise linear, convex, and continuous objective function and linear constraints. The first is a generalization of Dantzig's ordinary simplex algorithm; the second adapts P. Wolfe's reduced gradient method to the problem at hand. In contrast to the commonly used algorithms, both methods have the advantage of working without increasing the number of variables or constraints. An algorithmic presentation and a discussion of the suitability of different variants of the methods are provided.
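To make concrete what the abstract means by "an increase of the number of variables or restrictions": the commonly used approach handles a piecewise linear convex objective f(x) = max_i (aᵢᵀx + bᵢ) via the epigraph reformulation, introducing an extra variable t and one extra constraint per affine piece. The sketch below (an illustration of that standard reformulation, not of the paper's own methods; all names and data are hypothetical) shows the resulting growth in problem size.

```python
# Standard epigraph reformulation (the approach the paper's methods avoid):
# minimize f(x) = max_i (a_i . x + b_i) becomes
# minimize t subject to a_i . x + b_i <= t for all i,
# i.e. one extra variable t and one extra constraint per piece.

def f(x, pieces):
    """Piecewise linear convex objective: the max over affine pieces (a, b)."""
    return max(sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in pieces)

def epigraph_lp(pieces, n):
    """Build the enlarged LP data (c, A_ub, b_ub) for: min t s.t. a_i.x - t <= -b_i."""
    c = [0.0] * n + [1.0]                          # objective: minimize the extra variable t
    A_ub = [list(a) + [-1.0] for a, _ in pieces]   # a_i . x - t <= -b_i
    b_ub = [-b for _, b in pieces]
    return c, A_ub, b_ub

# Hypothetical example: f(x) = max(x1, x2, 1 - x1 - x2) over x in R^2.
pieces = [([1.0, 0.0], 0.0), ([0.0, 1.0], 0.0), ([-1.0, -1.0], 1.0)]
c, A_ub, b_ub = epigraph_lp(pieces, n=2)
print(len(c), len(A_ub))        # 3 variables (x1, x2, t) and 3 added constraints
print(f([0.25, 0.5], pieces))   # max(0.25, 0.5, 0.25) = 0.5
```

The paper's two methods instead work on f directly, keeping the original variable and constraint counts.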
