Abstract
In this paper, we prove Hanson–Wright inequalities for sparse quadratic forms in subgaussian random variables. These yield concentration inequalities for sparse subgaussian random vectors in two settings. Let $X=(X_{1},\ldots,X_{m})\in\mathbf{R}^{m}$ be an isotropic random vector with independent subgaussian components, and let $\xi=(\xi_{1},\ldots,\xi_{m})\in\{0,1\}^{m}$ be a vector of independent Bernoulli random variables, independent of $X$. First, we prove a large deviation bound for the sparse quadratic form $(X\circ\xi)^{T}A(X\circ\xi)$, where $A\in\mathbf{R}^{m\times m}$ is an $m\times m$ matrix and $X\circ\xi$ denotes the Hadamard product of $X$ and $\xi$, defined by $(X\circ\xi)_{i}=X_{i}\xi_{i}$. The second type of sparsity arises when we randomly sample the elements of an anisotropic subgaussian vector $Y=HX$, where $H\in\mathbf{R}^{m\times m}$ is an $m\times m$ symmetric matrix; here we prove a large deviation bound on the $\ell_{2}$-norm $\lVert D_{\xi}Y\rVert_{2}^{2}$ about its expected value, where for a given vector $x\in\mathbf{R}^{m}$, $D_{x}=\operatorname{diag}(x)$ denotes the diagonal matrix whose main diagonal entries are the entries of $x$. This form arises naturally in the context of covariance estimation.
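For context, one standard formulation of the classical (non-sparse) Hanson–Wright inequality, which the results above extend, is as follows: if $X$ has independent, mean-zero components with subgaussian norm at most $K$, then for an absolute constant $c>0$ and every $t\ge 0$,
\[
\mathbf{P}\left(\left|X^{T}AX-\mathbf{E}\,X^{T}AX\right|>t\right)
\le 2\exp\left(-c\min\left(\frac{t^{2}}{K^{4}\lVert A\rVert_{F}^{2}},\;\frac{t}{K^{2}\lVert A\rVert}\right)\right),
\]
where $\lVert A\rVert_{F}$ and $\lVert A\rVert$ denote the Frobenius and operator norms of $A$, respectively. The sparse variants studied here replace $X$ by $X\circ\xi$ (or $Y$ by $D_{\xi}Y$), so the bounds additionally depend on the Bernoulli sampling parameters.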