Abstract
Let $n,k\geq 1$ and let $G$ be the $n\times n$ random matrix with i.i.d. standard real Gaussian entries. We show that there are constants $c_{k},C_{k}>0$ depending only on $k$ such that the smallest singular value of $G^{k}$ satisfies \[ c_{k}\,t\leq{\mathbb {P}} \big \{s_{\min }(G^{k})\leq t^{k}\,n^{-1/2}\big \}\leq C_{k}\,t,\quad t\in (0,1], \] and, furthermore, \[ c_{k}/t\leq{\mathbb {P}} \big \{\|G^{-k}\|_{HS}\geq t^{k}\,n^{1/2}\big \}\leq C_{k}/t,\quad t\in [1,\infty ), \] where $\|\cdot \|_{HS}$ denotes the Hilbert–Schmidt norm.
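The small ball bound above can be probed numerically. The following is a minimal Monte Carlo sketch (not part of the paper; the choices of $n$, $k$, $t$, the trial count, and the seed are arbitrary illustrations) that estimates $\mathbb{P}\{s_{\min}(G^{k})\leq t^{k}n^{-1/2}\}$ by sampling Gaussian matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 50, 2, 200   # illustrative sizes, not from the paper
t = 0.5

hits = 0
for _ in range(trials):
    G = rng.standard_normal((n, n))          # i.i.d. standard real Gaussian entries
    Gk = np.linalg.matrix_power(G, k)        # G^k
    s_min = np.linalg.svd(Gk, compute_uv=False)[-1]  # smallest singular value
    if s_min <= t**k / np.sqrt(n):           # event {s_min(G^k) <= t^k n^{-1/2}}
        hits += 1

p_hat = hits / trials
print(p_hat)  # the theorem says this is between c_k*t and C_k*t
```

The theorem guarantees only that the estimate lies between $c_k t$ and $C_k t$ for unspecified constants, so the printed value is expected to scale linearly in $t$ rather than match it exactly.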
Highlights
In the 1940s, von Neumann and Goldstine [4] conjectured that the “typical” value of $s_{\min}(G)$ is of order $n^{-1/2}$, while the condition number $\kappa(G)=s_{\max}(G)/s_{\min}(G)$ is of order $n$
Returning to linear systems with random coefficients, it seems natural to consider the situation in which we are given a linear system of the form $G^{k}x=b$, where $k\geq 1$ is fixed, and would like to estimate the relative error of the obtained solution when $b$ is known only up to some additive error
We could ask: what is the typical value of the condition number of $G^{k}$, and what are optimal large deviation estimates for $\kappa(G^{k})$? Since the largest singular value of $G^{k}$ is of order $\Theta_{k}(n^{k/2})$ with very large probability, the question essentially amounts to computing small ball probabilities for $s_{\min}(G^{k})$
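The claim that $s_{\max}(G^{k})=\Theta_{k}(n^{k/2})$ with very large probability can be checked empirically. Here is a hedged numerical sketch (the sizes and seed are arbitrary choices for illustration, not values from the paper) computing the singular spectrum of $G^{k}$ and the resulting condition number:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 2                     # illustrative dimensions
G = rng.standard_normal((n, n))   # i.i.d. standard real Gaussian entries
Gk = np.linalg.matrix_power(G, k)

s = np.linalg.svd(Gk, compute_uv=False)  # singular values, descending
kappa = s[0] / s[-1]                     # condition number of G^k

# s[0] / n^{k/2} should be of constant order (in k) with high probability
print(s[0] / n ** (k / 2))
print(kappa)
```

Since $s_{\max}(G^{k})$ concentrates at scale $n^{k/2}$, large deviations of $\kappa(G^{k})$ are governed by how small $s_{\min}(G^{k})$ can be, which is exactly the small ball question above.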
Summary
Everywhere in the paper, $G$ denotes an $n\times n$ random matrix with i.i.d. real-valued standard Gaussian entries, and $\|\cdot\|_{HS}$ denotes the Hilbert–Schmidt norm of a matrix. Given a linear system of the form $G^{k}x=b$, where $k\geq 1$ is fixed, we would like to estimate the relative error of the obtained solution when $b$ is known only up to some additive error. In this case, we could ask: what is the typical value of the condition number of $G^{k}$, and what are optimal large deviation estimates for $\kappa(G^{k})$?