Abstract

We present a quantum interior-point method (IPM) for second-order cone programming (SOCP) that runs in time $\widetilde{O}\left(n\sqrt{r}\,\frac{\zeta\kappa}{\delta^2}\log(1/\epsilon)\right)$, where $r$ is the rank and $n$ the dimension of the SOCP, $\delta$ bounds the distance of intermediate solutions from the cone boundary, $\zeta$ is a parameter upper bounded by $\sqrt{n}$, and $\kappa$ is an upper bound on the condition number of matrices arising in the classical IPM for SOCP. The algorithm takes as its input a suitable quantum description of an arbitrary SOCP and outputs a classical description of a $\delta$-approximate $\epsilon$-optimal solution of the given problem. Furthermore, we perform numerical simulations to determine the values of the aforementioned parameters when solving the SOCP up to a fixed precision $\epsilon$. We present experimental evidence that in this case our quantum algorithm exhibits a polynomial speedup over the best classical algorithms for solving general SOCPs, which run in time $O(n^{\omega+0.5})$ (here, $\omega$ is the matrix multiplication exponent, with a value of roughly 2.37 in theory and up to 3 in practice). For the case of random SVM (support vector machine) instances of size $O(n)$, the quantum algorithm scales as $O(n^k)$, where the exponent $k$ is estimated to be 2.59 using a least-squares power-law fit. On the same family of random instances, the estimated scaling exponent for an external SOCP solver is 3.31, while that for a state-of-the-art SVM solver is 3.11.
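
The scaling exponents quoted above come from fitting a power law $T(n) \approx c\,n^k$ to runtime measurements. Since $\log T = k \log n + \log c$ is linear in log-log space, ordinary least squares on the logs recovers $k$. The sketch below illustrates this fit in Python with NumPy; the data points are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Problem sizes and synthetic runtimes drawn near a k = 2.59 power law
# (placeholder data, not the paper's measurements).
n = np.array([100, 200, 400, 800, 1600])
rng = np.random.default_rng(0)
T = 1e-3 * n**2.59 * np.exp(0.05 * rng.standard_normal(n.size))

# Least-squares power-law fit: a straight-line fit in log-log space,
# whose slope is the scaling exponent k.
k, log_c = np.polyfit(np.log(n), np.log(T), 1)
print(f"estimated scaling exponent k = {k:.2f}")  # ≈ 2.59
```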

Highlights

  • It is well known that many interesting and relevant optimization problems in the domain of Machine Learning can be expressed in the framework of convex optimization [6, 10].

  • We propose support vector machines (SVM) as a candidate for an end-to-end quantum speedup, via an algorithm based on a quantum interior-point method.

  • We provide a high-level sketch of our results and of the techniques used in our quantum interior-point method for SOCPs, beginning with a discussion of the differences between classical and quantum interior-point methods.

Summary

Introduction

It is well known that many interesting and relevant optimization problems in the domain of Machine Learning can be expressed in the framework of convex optimization [6, 10]. Very recently, [13] have shown that it is possible to solve linear programs (LP) in time $O(n^\omega)$, the time it takes to multiply two matrices (as long as $\omega \ge 2 + 1/6$, which is currently the case). This result has been further extended in [27] to a slightly more general class of cones, but their techniques did not yield improved complexities for second-order cone (SOCP) and semidefinite programming (SDP). It remains an open question to find an end-to-end optimization problem for which quantum SDP solvers achieve an asymptotic speedup over state-of-the-art classical algorithms. We propose support vector machines (SVM) as a candidate for such an end-to-end quantum speedup, via an algorithm based on a quantum interior-point method.
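
As a concrete instance of an ML problem cast as convex optimization, the soft-margin SVM training problem is a convex program that standard modeling tools reduce to conic (SOCP) form. The sketch below expresses it with cvxpy on synthetic data; this is a minimal illustration of the formulation, not the paper's reduction, and the regularization constant C and the data are placeholder choices.

```python
import cvxpy as cp
import numpy as np

# Synthetic, roughly linearly separable data (placeholder, for illustration).
rng = np.random.default_rng(0)
m, d = 40, 5
X = rng.standard_normal((m, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(m))

w = cp.Variable(d)    # normal vector of the separating hyperplane
b = cp.Variable()     # bias term
xi = cp.Variable(m)   # slack variables for margin violations
C = 1.0               # regularization trade-off (assumed value)

# Soft-margin SVM: minimize (1/2)||w||^2 + C * sum(xi)
# subject to y_i (w . x_i + b) >= 1 - xi_i and xi_i >= 0.
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
constraints = [cp.multiply(y, X @ w + b) >= 1 - xi, xi >= 0]
cp.Problem(objective, constraints).solve()
print("margin normal w =", w.value)
```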

Our results and techniques
Related work
Second-order cone programming
Euclidean Jordan algebras
Interior-point methods
Quantum linear algebra
A quantum interior-point method
Central path
A single quantum IPM iteration
Rescaling x and s
Maintaining strict feasibility
Maintaining closeness to central path
Final complexity and feasibility
Quantum Support-Vector Machines
Reducing SVM to SOCP
Experimental results