Abstract
The classical Grothendieck inequality has applications to the design of approximation algorithms for $\mathsf{NP}$-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a polynomial-time constant-factor approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.
Highlights
In what follows, the standard scalar product on $\mathbb{C}^n$ is denoted $\langle\cdot,\cdot\rangle$, i.e., $\langle x,y\rangle=\sum_{i=1}^{n}x_i\overline{y_i}$ for all $x,y\in\mathbb{C}^n$.
A simple transformation [2] relates the Grothendieck problem to the Frieze-Kannan Cut Norm problem [12], and as such the constant-factor approximation algorithm for the Grothendieck problem has found a variety of applications in combinatorial optimization; see the survey [26] for much more on this topic. (A brute-force check of the cut-norm relation is sketched just after these highlights.)
A rigorous analysis of a polynomial-time approximation algorithm for this problem appears in the work of Nemirovski [35], where the generalized orthogonal Procrustes problem is treated as an important special case of a more general family of problems called “quadratic optimization under orthogonality constraints,” for which he obtains an $O\big(\sqrt[3]{n+d+\log K}\,\big)$-approximation algorithm. (A minimal illustration of the classical two-matrix Procrustes case appears below.)
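The transformation of [2] rests on the elementary fact that the cut norm $\|A\|_{\square}=\max_{S,T}\big|\sum_{i\in S,\,j\in T}A_{ij}\big|$ and the bilinear $\pm 1$ maximization $\max_{\varepsilon,\delta\in\{-1,1\}^n}\sum_{i,j}A_{ij}\varepsilon_i\delta_j$ are always within a factor of $4$ of each other. The brute-force sketch below is illustrative only (exponential time, unlike the polynomial-time algorithms discussed in the paper), and the function names are ours; it simply checks this relation on small random matrices.

```python
import itertools
import numpy as np

def cut_norm(A):
    """||A||_cut = max over subsets S, T of |sum_{i in S, j in T} A_ij| (brute force)."""
    m, n = A.shape
    best = 0.0
    for x in itertools.product([0, 1], repeat=m):
        for y in itertools.product([0, 1], repeat=n):
            best = max(best, abs(np.array(x) @ A @ np.array(y)))
    return best

def infty_to_one_norm(A):
    """max over eps, delta in {-1,+1} of eps^T A delta (brute force)."""
    m, n = A.shape
    best = -np.inf
    for e in itertools.product([-1, 1], repeat=m):
        for d in itertools.product([-1, 1], repeat=n):
            best = max(best, np.array(e) @ A @ np.array(d))
    return best

rng = np.random.default_rng(1)
for _ in range(5):
    A = rng.standard_normal((4, 4))
    c, g = cut_norm(A), infty_to_one_norm(A)
    # ||A||_cut <= ||A||_{inf->1} <= 4 ||A||_cut
    assert c <= g + 1e-9 and g <= 4 * c + 1e-9
    print(f"cut norm {c:.3f}  <=  bilinear +-1 max {g:.3f}  <=  4 x cut norm {4 * c:.3f}")
```

The upper bound follows by splitting each $\pm 1$ vector into its positive and negative parts (four $0/1$ terms, each at most the cut norm); the lower bound follows by writing each $0/1$ vector as an average of $\pm 1$ vectors.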
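For orientation on the Procrustes side: Nemirovski's result concerns the generalized ($K$-matrix) problem, but the classical two-matrix special case $\min_{\Omega\in O_d}\|A\Omega-B\|_F$ has a well-known closed-form solution via the SVD of $A^{\top}B$. The sketch below shows only this classical case, not the algorithms analyzed in the paper; the helper name is ours (SciPy exposes a similar routine as scipy.linalg.orthogonal_procrustes).

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Return the orthogonal Omega minimizing ||A @ Omega - B||_F.

    Classical closed form: if A^T B = U S V^T (SVD), the minimizer is Omega = U V^T.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 5))
Omega_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a hidden orthogonal transformation
B = A @ Omega_true
Omega = orthogonal_procrustes(A, B)
print(np.allclose(Omega, Omega_true))  # generically recovers the hidden orthogonal matrix
```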
Summary
In what follows, the standard scalar product on $\mathbb{C}^n$ is denoted $\langle\cdot,\cdot\rangle$, i.e., $\langle x,y\rangle=\sum_{i=1}^{n}x_i\overline{y_i}$ for all $x,y\in\mathbb{C}^n$. We always think of $\mathbb{R}^n$ as canonically embedded in $\mathbb{C}^n$; in particular, the restriction of $\langle\cdot,\cdot\rangle$ to $\mathbb{R}^n$ is the standard scalar product on $\mathbb{R}^n$. Given a set $S$, the space $M_n(S)$ stands for all the matrices $M=(M_{ij})_{i,j=1}^{n}$ with $M_{ij}\in S$ for all $i,j\in\{1,\dots,n\}$. The main result, Theorem 1.1, asserts that there exists a polynomial-time algorithm that takes as input $M\in M_n(M_n(\mathbb{R}))$ (respectively $M\in M_n(M_n(\mathbb{C}))$) and outputs a feasible solution whose value is at least a universal constant fraction of the optimum. The novelty of the applications to combinatorial optimization that are described below is the mere existence of a constant-factor approximation algorithm; all the other examples lead to new algorithmic results. Many of the applications below follow from a more versatile reformulation of Theorem 1.1 that is presented in Section 5 (see Proposition 5.1).
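The summary above truncates the precise statement of Theorem 1.1; the exact guarantees are in the paper. As a rough illustration of the kind of object involved, the sketch below evaluates a bilinear objective in the entries of two orthogonal matrices for a given $M\in M_n(M_n(\mathbb{R}))$ and compares a few randomly sampled orthogonal candidates. The index convention and the random-sampling baseline are illustrative assumptions only, not the paper's rounding procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    """Sample a random orthogonal matrix via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))  # fix column signs to avoid a biased distribution

def objective(M, U, V):
    """Bilinear objective sum_{i,j,k,l} M[i,j,k,l] * U[i,j] * V[k,l].

    M is an element of M_n(M_n(R)) stored as a 4-index array (M[i, j] is an
    n x n block); the index convention is an assumption made for illustration.
    """
    return np.einsum('ijkl,ij,kl->', M, U, V)

n = 3
M = rng.standard_normal((n, n, n, n))  # a generic M in M_n(M_n(R))
best = max(objective(M, random_orthogonal(n), random_orthogonal(n))
           for _ in range(1000))
print(f"best random-orthogonal value over 1000 samples: {best:.3f}")
```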