Abstract

Because the matrix elements change from iteration to iteration, many of the computational benefits embodied in sparse matrix theory and implemented in commercial LP codes for maintaining sparse inverse updates are lost in NLP. This paper reports the results of investigating structural decomposition in large, sparse NLP problems solved with the GRG algorithm. The approach is to partition the basis matrix into block lower triangular (BLT) form so that, at each step of the GRG algorithm, all operations are performed on the smaller diagonal subsets of variables. This approach led to the development of an algorithm that dynamically reorders a square matrix into BLT form after a column replacement. The method is fast, reducing computation time by up to a factor of 10 compared with ordering the complete occurrence matrix, while requiring minimal computer memory: only one subset of the occurrence matrix, followed by the condensed occurrence matrix, need be held in memory to order the modified matrix. The algorithm is applicable to any numerical method that modifies the structure of a matrix by replacing one column. An experimental GRG code, GRGLSS, was developed to test the technique. Examples demonstrate significantly faster computation in the feasibility phase of the GRG algorithm.
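The abstract does not reproduce the algorithm itself. As an illustration of the static ordering problem that the paper's dynamic column-update method improves upon, the sketch below (in Python) permutes a square occurrence matrix into BLT form using Tarjan's strongly-connected-components algorithm, which emits components sinks-first, exactly the diagonal-block order of a BLT permutation. It assumes the matrix already has a zero-free diagonal (as a basis matrix does after pivoting); the name blt_order and the dense list-of-lists representation are illustrative choices, not the paper's data structures.

```python
def blt_order(A):
    """Return (perm, blocks) so that A reordered by perm on both rows and
    columns is block lower triangular.  Assumes A is square with a
    zero-free diagonal.  Illustrative sketch, not the paper's algorithm.
    """
    n = len(A)
    # Digraph: edge i -> j wherever A[i][j] is structurally nonzero.
    adj = [[j for j in range(n) if j != i and A[i][j] != 0] for i in range(n)]

    index = [None] * n        # DFS discovery number of each node
    low = [0] * n             # lowest discovery number reachable
    on_stack = [False] * n
    stack, blocks = [], []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack[v] = True
        for w in adj[v]:
            if index[w] is None:
                strongconnect(w)          # recursive; adequate for a sketch
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                comp.append(w)
                if w == v:
                    break
            blocks.append(comp)           # sinks emitted first => BLT order

    for v in range(n):
        if index[v] is None:
            strongconnect(v)

    perm = [v for comp in blocks for v in comp]
    return perm, blocks


# Usage example: the permuted matrix B is block lower triangular, with
# each entry of `blocks` giving one diagonal block of variables.
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 1, 1]]
perm, blocks = blt_order(A)
B = [[A[i][j] for j in perm] for i in perm]
```

The paper's contribution goes further than this static ordering: after a single column replacement, only the affected portion of the (condensed) occurrence matrix is reexamined rather than rerunning the full ordering, which is the source of the reported factor-of-10 speedup.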
