Abstract
This document introduces a method for solving linear optimization problems. The method's strategy is based on the bounding condition that each constraint exerts over the dimensions of the problem. The solution of a linear optimization problem lies at the intersection of the constraints defining the extreme vertex. The method decomposes the n-dimensional linear problem into n-1 two-dimensional problems. After studying the role of the constraints in these two-dimensional problems, we identify the constraints intersecting at the extreme vertex. We then formulate a linear equation system that leads directly to the solution of the optimization problem. The algorithm differs markedly from previously existing linear programming algorithms in that it does not iterate; it is deterministic. A fully C#-coded implementation is made available. We believe this algorithm, and the methods applied for classifying constraints according to their role, open up a useful framework for studying complex linear problems through feasible-space and constraint analysis.
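The paper's own algorithm is not reproduced here, but the underlying principle (the optimum of a linear program sits at a vertex where constraints intersect, so once the binding constraints are known the solution follows from a linear equation system) can be illustrated with a brute-force sketch. The code below is a hypothetical illustration, not the authors' method: it enumerates every intersection of n constraints, solves the resulting linear system, and keeps the best feasible candidate.

```python
import itertools
import numpy as np

def solve_lp_by_vertex_enumeration(c, A, b):
    """Maximize c @ x subject to A @ x <= b by checking every vertex,
    i.e. every intersection of n constraint hyperplanes.
    Illustrative only: exponential in the number of constraints."""
    n = len(c)
    best_x, best_val = None, -np.inf
    for rows in itertools.combinations(range(len(A)), n):
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        try:
            x = np.linalg.solve(sub_A, sub_b)  # candidate vertex
        except np.linalg.LinAlgError:
            continue  # constraints do not intersect in a single point
        if np.all(A @ x <= b + 1e-9):  # keep only feasible vertices
            val = c @ x
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Example: maximize 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])
x_opt, val_opt = solve_lp_by_vertex_enumeration(c, A, b)
# x_opt is (3, 1), where x + y <= 4 and x <= 3 are binding; val_opt is 11
```

Enumeration visits every vertex and is therefore impractical beyond toy sizes; the point is only that identifying the binding constraints reduces the optimization problem to solving one square linear system, which is the observation the paper's deterministic method builds on.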
Highlights
Since the appearance of Dantzig's simplex algorithm in 1947, linear optimization has become a widely used method to model multidimensional decision problems
This paper presents a deterministic method for solving linear optimization problems
Despite the number of vertices visited, the relatively simple calculations required to update the data of neighboring vertices at each step keep the simplex method computationally effective for most practical situations, and it remains the most popular linear optimization algorithm
Summary
Since the appearance of Dantzig's simplex algorithm in 1947, linear optimization has become a widely used method to model multidimensional decision problems. Appearing even before the establishment of digital computers, the first version of the simplex algorithm remained for a long time the most effective way to solve large-scale linear problems. Mead [2] introduced an algorithm that evaluates the coordinates of the vertices of a growing simplex to find the optimal value of minimization problems. This algorithm disregards derivatives of the objective function and is adversely affected when the problem's dimension grows. The proposal is mostly applicable to problems of two and three dimensions.