Abstract

The last decade has been a fascinating time for researchers interested in linear programming and its extensions. It opened with Narendra Karmarkar's theoretical paper on the computational complexity of linear programming problems and his claim that his method solved problems 50 times faster than the simplex method. After 10 years of development in both interior-point and simplex methods, we are now at a point at which both approaches can handle problems that seemed intractable before, but where interior-point methods seem to be superior to simplex algorithms for many very large-scale sparse linear programming problems. Lustig, Marsten, and Shanno (LMS) provide an excellent overview of these advances, particularly with respect to their computational significance. Of course, they have also been major contributors to the development of interior-point algorithms and to sophisticated implementations such as their OB1 code. Their experience leads to many interesting insights and also raises some questions: why does the primal-dual method allow such long steps (99.95% of the way to the boundary, whereas primal or dual affine-scaling methods seem limited to, say, 95%)? Do the primal and dual “help each other”? And why does the primal-dual affine-scaling algorithm perform poorly, while even tiny centering components render it highly efficient? In these remarks I want to indicate the interplay between theory and computational practice, highlight theoretical developments with (potentially) important consequences, and point out some major remaining tasks.
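To make the step-length question concrete, the following is a minimal sketch of the "fraction to the boundary" rule that primal-dual interior-point codes typically apply; it is not the OB1 implementation, and the function name and the default value eta = 0.9995 (matching the 99.95% figure above) are illustrative assumptions.

```python
# Hypothetical illustration of the fraction-to-the-boundary step rule.
# With eta = 0.9995 the iterate moves 99.95% of the way to the nonnegativity
# boundary along the search direction; damped affine-scaling codes would use
# a smaller factor such as eta = 0.95.
import numpy as np

def step_to_boundary(x, dx, eta=0.9995):
    """Largest step alpha in (0, 1] keeping x + alpha*dx strictly positive,
    scaled back by the safety factor eta. (Illustrative helper, not from the paper.)"""
    # Only components that decrease (dx < 0) can reach the boundary x_i = 0.
    neg = dx < 0
    if not np.any(neg):
        return 1.0
    alpha_max = np.min(-x[neg] / dx[neg])  # step at which the first component hits zero
    return min(1.0, eta * alpha_max)

# Example usage with made-up data:
x = np.array([1.0, 2.0, 0.5])
dx = np.array([-0.8, 1.0, -2.0])
print(step_to_boundary(x, dx))            # ~0.2499 (99.95% of the way to the boundary)
print(step_to_boundary(x, dx, eta=0.95))  # ~0.2375 (a more conservative step)
```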
