Abstract

We generalize primal--dual interior-point methods for linear programming (LP) problems to convex optimization problems in conic form. Previously, the most comprehensive theory of symmetric primal--dual interior-point algorithms was given by Nesterov and Todd for feasible regions expressed as the intersection of a symmetric cone with an affine subspace. In our setting, we allow an arbitrary convex cone in place of the symmetric cone. Even though some of the impressive properties attained by Nesterov--Todd algorithms are impossible in this general setting of convex optimization problems, we show that essentially all primal--dual interior-point algorithms for LP can be extended easily to the general setting. We provide three frameworks for primal--dual algorithms, each framework corresponding to a different level of sophistication in the algorithms. As the level of sophistication increases, we demand better formulations of the feasible solution sets. Our algorithms, in return, attain provably better theoretical properties. We also make a very strong connection to quasi-Newton methods by expressing the square of the symmetric primal--dual linear transformation (the so-called scaling) as a quasi-Newton update in the case of the least sophisticated framework.

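To fix intuition for the scaling mentioned above, consider the LP case (a sketch with assumed notation: $x$ and $s$ denote strictly positive primal and dual iterates, and $T$ the scaling; none of these symbols is fixed by the abstract itself). There the symmetric primal--dual scaling is the positive diagonal matrix

\[
T \;=\; \operatorname{Diag}(x)^{1/2}\,\operatorname{Diag}(s)^{-1/2},
\qquad\text{so that}\qquad
T^{2}s \;=\; x .
\]

The requirement $T^{2}s = x$ on the positive definite matrix $T^{2}$ is analogous to the secant equation that a quasi-Newton update (e.g., BFGS) imposes on its positive definite Hessian approximation; this is the sense in which the square of the scaling can be viewed as a quasi-Newton update.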