Abstract

We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measure the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds. The most interesting outcomes are:

The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the $2^n \times 2^n$ Walsh-Hadamard transform $H_n$ (a.k.a. Sylvester matrices, a.k.a. the communication matrix of Inner Product modulo 2), we show how to modify only $2^{\epsilon n}$ entries in each row and make the rank of $H_n$ drop below $2^{n(1-\Omega(\epsilon^2/\log(1/\epsilon)))}$, for all small $\epsilon > 0$, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices such as $H_n$, via L. Valiant's matrix rigidity approach. We also show non-trivial rigidity upper bounds for $H_n$ with smaller target rank.

Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. First, we show that explicit $n \times n$ Boolean matrices which maintain rank at least $2^{(\log n)^{1-\delta}}$ after $n^2/2^{(\log n)^{\delta/2}}$ modified entries (over any field, for any $\delta > 0$) would yield an explicit function that does not have sub-quadratic-size $\mathsf{AC}^0$ circuits with two layers of arbitrary linear threshold gates. Second, we prove that explicit 0/1 matrices over $\mathbb{R}$ which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply exponential-gate lower bounds for the infamously difficult class of depth-two linear threshold circuits with arbitrary weights on both layers. In particular, we show that matrices defined by these seemingly-difficult circuit classes actually have low probabilistic rank and sign-rank, respectively.

An Equivalence Between Communication, Probabilistic Rank, and Rigidity. It has been known since Razborov [1989] that explicit rigidity lower bounds would resolve longstanding lower-bound problems in communication complexity, but it seemed possible that communication lower bounds could be proved without making progress on matrix rigidity. We show that for every function $f$ which is randomly self-reducible in a natural way (the inner product mod 2 is an example), bounding the communication complexity of $f$ (in a precise technical sense) is equivalent to bounding the rigidity of the matrix of $f$, via an equivalence with probabilistic rank.
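For concreteness, the matrix family in the first result can be written down directly. The following is a minimal sketch, not taken from the paper (Python with NumPy, helper names ours), that builds $H_n$ via the standard Sylvester recursion, checks that it agrees with the $\pm 1$ communication matrix of Inner Product mod 2, and confirms it has full rank $2^n$ over the reals before any entries are modified.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H_0 = [1], H_{k+1} = [[H_k, H_k], [H_k, -H_k]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

def inner_product_matrix(n):
    """+-1 communication matrix of Inner Product mod 2: entry (x, y) = (-1)^<x, y>."""
    size = 2 ** n
    M = np.empty((size, size), dtype=int)
    for x in range(size):
        for y in range(size):
            M[x, y] = -1 if bin(x & y).count("1") % 2 else 1
    return M

n = 3
H = hadamard(n)
assert np.array_equal(H, inner_product_matrix(n))  # same matrix, two equivalent views
print(np.linalg.matrix_rank(H))                    # 2^n = 8: full rank over the reals
# The first result above shows this rank can be pushed below 2^{n(1 - Omega(eps^2/log(1/eps)))}
# by modifying only 2^{eps n} entries in each row, for any small eps > 0.
```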

