Abstract

In this paper, we prove the edge universality of the largest eigenvalues for separable covariance matrices of the form $\mathcal Q :=A^{1/2}XBX^*A^{1/2}$. Here $X=(x_{ij})$ is an $n\times N$ random matrix with $x_{ij}=N^{-1/2}q_{ij}$, where the $q_{ij}$ are i.i.d. random variables with zero mean and unit variance, and $A$ and $B$ are, respectively, $n \times n$ and $N\times N$ deterministic non-negative definite symmetric (or Hermitian) matrices. We consider the high-dimensional case, i.e. ${n}/{N}\to d \in (0, \infty)$ as $N\to \infty$. Assuming $\mathbb E q_{ij}^3=0$ and some mild conditions on $A$ and $B$, we prove that the limiting distribution of the largest eigenvalue of $\mathcal Q$ coincides with that of the corresponding Gaussian ensemble (i.e. $\mathcal Q$ with $X$ an i.i.d. Gaussian matrix) as long as $\lim_{s \rightarrow \infty}s^4 \mathbb{P}(\vert q_{ij} \vert \geq s)=0$, which is a sharp moment condition for edge universality. If we take $B=I$, then $\mathcal Q$ becomes the usual sample covariance matrix, and the edge universality holds without the vanishing third moment condition. So far, this is the strongest edge universality result for sample covariance matrices with correlated data (i.e. non-diagonal $A$) and heavy tails, improving the previous results in \cite{BPZ1,LS} (assuming high moments and diagonal $A$), \cite{Anisotropic} (assuming high moments) and \cite{DY} (assuming diagonal $A$).
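The separable model above can be simulated directly. The following sketch (not from the paper; dimensions, the choice of $A$ and $B$, and the entry distribution are illustrative assumptions) builds $\mathcal Q = A^{1/2}XBX^*A^{1/2}$ with standardized entries $x_{ij}=N^{-1/2}q_{ij}$ and computes its largest eigenvalue, whose fluctuations the theorem describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions with n/N -> d in (0, infinity); values are illustrative.
n, N = 200, 400

# Deterministic non-negative definite A (n x n) and B (N x N).
# A is made non-diagonal to illustrate correlated data; choices are arbitrary.
A = np.diag(1.0 + 0.5 * np.arange(n) / n)
A[0, 1] = A[1, 0] = 0.2  # a small off-diagonal entry (A stays PSD here)
B = np.eye(N)            # B = I reduces Q to a sample covariance matrix

# X = N^{-1/2} (q_ij) with q_ij i.i.d., mean zero, unit variance.
q = rng.standard_normal((n, N))
X = q / np.sqrt(N)

# Matrix square root of A via its eigendecomposition (A symmetric PSD).
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# Q = A^{1/2} X B X^* A^{1/2}; its largest eigenvalue is the object of study.
Q = A_half @ X @ B @ X.T @ A_half
lam_max = np.linalg.eigvalsh(Q).max()
```

Repeating this over many draws of $X$ (Gaussian vs. heavy-tailed $q_{ij}$) and comparing the empirical distributions of `lam_max` is one way to observe the universality the theorem asserts.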

Highlights

  • Sample covariance matrices are fundamental objects in multivariate statistics

  • We focus on the limiting distribution of the largest eigenvalues of high-dimensional sample covariance matrices, which is of great interest in principal component analysis

  • If we take B = I, Q1 becomes the normal sample covariance matrix and Theorem 2.7 indicates that the edge universality of the largest eigenvalue of Q1 holds true for correlated data with heavy tails as in (1.1)

Summary

Introduction

Sample covariance matrices are fundamental objects in multivariate statistics. Given a centered random vector $y \in \mathbb{R}^n$ and its i.i.d. copies $y_i$, $i = 1, \cdots, N$, the sample covariance matrix $\mathcal Q := N^{-1}\sum_i y_i y_i^*$ is the simplest estimator for the covariance matrix $A := \mathbb{E} yy^*$. For an $n\times N$ matrix $X$ consisting of independent centered entries with a general variance profile $\mathbb{E}|x_{ij}|^2 = \sigma_{ij}/N$, an optimal local law was proved in [1, 2] for the resolvent $(XX^* - z)^{-1}$ under the assumption of arbitrarily high moments. Note that this gives the local law for $G$ only in the case where both $A$ and $B$ are diagonal; the argument there is quite specific and cannot be adapted to the general case with both $A$ and $B$ non-diagonal. Our result is still welcome in this setting: it shows that for sample covariance matrices the condition (1.2) is not necessary, and edge universality holds as long as (1.1) holds. If two random variables $X$ and $Y$ have the same distribution, we write $X \overset{d}{=} Y$.
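The estimator $\mathcal Q = N^{-1}\sum_i y_i y_i^*$ connects to the separable form with $B=I$: writing $y_i = A^{1/2}x_i$ with standardized $x_i$ gives $\mathcal Q = A^{1/2}XX^*A^{1/2}$. A minimal sketch of this identity (the population covariance $A$ and dimensions here are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 200

# Population covariance A (diagonal here for simplicity).
A = np.diag(np.linspace(1.0, 2.0, n))
A_half = np.sqrt(A)  # entrywise square root is valid since A is diagonal

# i.i.d. copies y_i = A^{1/2} x_i with standard Gaussian x_i.
Xraw = rng.standard_normal((n, N))
Y = A_half @ Xraw

# Sample covariance Q = N^{-1} sum_i y_i y_i^*.
Q = Y @ Y.T / N

# Equivalent separable form with B = I: Q = A^{1/2} (X X^*) A^{1/2},
# where X = N^{-1/2} Xraw has entries of variance 1/N.
X = Xraw / np.sqrt(N)
Q_sep = A_half @ X @ X.T @ A_half
```

The two constructions agree exactly, which is why results on $A^{1/2}XBX^*A^{1/2}$ with $B=I$ translate directly to statements about the classical sample covariance estimator.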

Separable covariance matrices
Resolvents and limiting law
Main result
Statistical applications
Basic notations and tools
Main tools
Basic tools
Entrywise local law
Interpolation and expansion
Conclusion of the proof with words
Non-vanishing third moment
Findings
Weak averaged local law