Abstract

In this paper, we develop an efficient interior point method (IPM) for convex quadratic programming (QP). The proposed algorithm keeps all relevant variables in a compressed format enabled by the tensor-train (TT) decomposition. The algorithm, called TT-IPM, requires much less memory than IPMs that utilize sparse arrays, and is able to solve large-scale QPs. Under certain assumptions, we prove that TT-IPM inherits the superlinear convergence property of traditional IPMs. Furthermore, TT-IPM uses storage that scales polylogarithmically with the number of variables and the TT-ranks. Finally, we illustrate the computation time and storage savings of TT-IPM on a trajectory optimization problem. We show that even in fairly sparse optimization problems, with the fraction of nonzero elements in the problem data close to 0.0002%, the tensor-train representation allows TT-IPM to outperform state-of-the-art IPMs that use sparse arrays.
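To give a sense of the compression the abstract refers to, the following is a minimal sketch of a TT-SVD-style decomposition, which represents a high-dimensional array as a chain of small three-way cores. The function names and truncation parameters here are illustrative assumptions, not the paper's TT-IPM implementation; for a low-TT-rank tensor the cores store far fewer entries than the full array, which is the storage effect the paper exploits.

```python
import numpy as np

def tt_decompose(tensor, max_rank=8, tol=1e-10):
    """Decompose a full tensor into tensor-train (TT) cores via
    sequential truncated SVDs (TT-SVD style). Illustrative sketch only."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    r_prev = 1
    # Unfold the tensor mode by mode, peeling off one core per SVD.
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(s > tol))))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (np.diag(s[:r]) @ Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    res = cores[0]  # shape (1, n0, r1)
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([res.ndim - 1], [0]))
    return res.reshape([c.shape[1] for c in cores])

# A separable (TT-rank-1) tensor compresses from 4^3 = 64 entries
# down to three 4-vectors stored in the cores.
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(4) for _ in range(3))
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_decompose(T)
print(sum(core.size for core in cores), "stored entries vs", T.size)
```

The same idea extends to the matrices and vectors inside an IPM: if the problem data has low TT-ranks, all iterates can be kept and manipulated in this compressed core format rather than as full (or even sparse) arrays.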
