Abstract

In this paper we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood and repeat this process. Given this framework, we prove polynomial complexity of the algorithm under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strict convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well-tuned and do not depend on the problem being solved. Furthermore, we study the behaviour of the method when it is applied to an infeasible problem, and identify a necessary condition for infeasibility. The latter is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it on a set of small- to large-scale linear and convex quadratic programming problems. The numerical results demonstrate the benefits of using regularization in IPMs as well as the reliability of the method.
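To illustrate why the regularization matters, consider the normal-equations matrix A Θ A^T that arises in each IPM iteration: it becomes singular when A is rank-deficient, whereas the dual-regularized matrix A Θ A^T + δI (the effect of the PMM dual proximal term) remains invertible. The sketch below is our own minimal illustration; the matrices and the value of δ are arbitrary and not taken from the paper.

```python
# Illustrative only: dual regularization makes the normal-equations matrix invertible.
# A is rank-deficient (its second row is a multiple of the first), so A Theta A^T
# is singular; adding delta*I, as a PMM-style dual regularization, repairs this.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1.0, 2.0], [2.0, 4.0]]       # rank-deficient constraint matrix
Theta = [[1.0, 0.0], [0.0, 1.0]]   # diagonal IPM scaling X Z^{-1} (identity here)

N = matmul(matmul(A, Theta), transpose(A))   # A Theta A^T: singular
delta = 1e-2                                 # illustrative regularization parameter
N_reg = [[N[i][j] + (delta if i == j else 0.0) for j in range(2)]
         for i in range(2)]                  # A Theta A^T + delta*I

print(det2(N))      # 0.0: the unregularized Newton system breaks down
print(det2(N_reg))  # positive: the regularized system is solvable
```

The same mechanism is what allows a regularized IPM to proceed without assuming full row rank of A.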

Highlights

  • In this paper, we consider the following primal-dual pair of linearly constrained convex quadratic programming problems in the standard form: min_x c^T x + (1/2) x^T Q x, s.t. Ax = b, x ≥ 0

  • In Lemma 8, we prove that, given Premise 1, the Interior Point-Proximal Method of Multipliers (IP-PMM) algorithm produces iterates that belong to the neighbourhood (19), and that μ_k → 0

  • To stress the importance of regularization, we compare IP-PMM with a non-regularized Interior Point Method (IPM)


Summary

Primal-dual pair of convex quadratic programming problems

We consider the following primal-dual pair of linearly constrained convex quadratic programming problems in the standard form: min_x c^T x + (1/2) x^T Q x, s.t. Ax = b, x ≥ 0. Using the Lagrangian function, one can formulate the first-order optimality conditions [known as the Karush–Kuhn–Tucker (KKT) conditions] for this primal-dual pair. For simplicity of exposition, when referring to convex quadratic programming problems, we implicitly assume that the problems are linearly constrained. If both (P) and (D) are feasible problems, it can be verified that there exists an optimal primal-dual triple (x, y, z) satisfying the KKT optimality conditions of this primal-dual pair (see for example [7, Proposition 2.3.4]).
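For concreteness, the KKT conditions of this pair (primal feasibility Ax = b, x ≥ 0; dual feasibility A^T y + z - Qx = c, z ≥ 0; complementarity x_j z_j = 0) can be checked directly on a toy instance. The instance below is our own illustration, not taken from the paper.

```python
# Verify the KKT conditions of  min (1/2) x^T Q x + c^T x  s.t.  Ax = b, x >= 0
# on a toy instance: Q = 2I, c = 0, A = [1 1], b = 2,
# whose optimal triple is x* = (1, 1), y* = 2, z* = (0, 0).

Q = [[2.0, 0.0], [0.0, 2.0]]
c = [0.0, 0.0]
A = [[1.0, 1.0]]
b = [2.0]

x = [1.0, 1.0]   # candidate primal solution
y = [2.0]        # multiplier for the equality constraint Ax = b
z = [0.0, 0.0]   # multipliers for the bounds x >= 0

# Primal feasibility residual: Ax - b
primal = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(1)]
# Dual feasibility residual: A^T y + z - Qx - c
dual = [sum(A[i][j] * y[i] for i in range(1)) + z[j]
        - sum(Q[j][k] * x[k] for k in range(2)) - c[j] for j in range(2)]
# Complementarity products: x_j * z_j
comp = [x[j] * z[j] for j in range(2)]

print(primal, dual, comp)  # all residuals zero, so (x, y, z) is optimal
```

An interior point method drives exactly these residuals to zero, while the complementarity products x_j z_j are relaxed to the barrier parameter μ and driven to zero as μ → 0.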

A primal-dual interior point method
Primal proximal point method
Dual proximal point method
Proximal method of multipliers
Regularization in interior point methods
Algorithmic framework
Convergence analysis of IP-PMM
Infeasible problems
Computational experience
Implementation details
Free variables
Constraint matrix scaling
PMM parameters
Termination criteria
Numerical results
Linear programming problems
Quadratic programming problems
Verification of the theory
Large scale problems
Findings
Conclusions
