Abstract

When the observation dimension is of the same order of magnitude as the number of samples, the conventional estimators of the covariance matrix and its inverse perform poorly. In order to obtain well-behaved estimators in such high-dimensional settings, we consider a general class of estimators of covariance matrices and precision matrices (i.e. inverse covariance matrices) based on weighted sampling and linear shrinkage. The estimation error is measured in terms of the matrix quadratic loss, which is used to calibrate the set of parameters defining our proposed estimator. In an asymptotic setting where the observation dimension grows at the same rate as the number of samples, we provide an estimator of the precision matrix that performs as well as the oracle estimator. Our results rely on recent contributions in the field of random matrix theory, and Monte Carlo simulations show the advantage of our precision matrix estimator in finite sample size settings.
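The abstract does not give the authors' weighted-sampling scheme or oracle calibration, but the linear-shrinkage idea it builds on can be illustrated with a minimal sketch: shrink the sample covariance toward a scaled identity target (in the spirit of Ledoit–Wolf) before inverting it, which stabilizes the precision estimate when the dimension is comparable to the sample size. The shrinkage intensity `rho = 0.3` below is a hypothetical choice, not the paper's calibrated value.

```python
import numpy as np

def shrinkage_precision(X, rho):
    """Invert a linearly shrunk sample covariance:
    Sigma(rho) = (1 - rho) * S + rho * mu * I, with mu = tr(S)/p,
    so the target is the identity scaled to match the average variance.
    NOTE: illustrative sketch only; the paper's estimator also involves
    weighted sampling and a loss-based calibration not shown here."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)  # sample covariance (p x p)
    mu = np.trace(S) / p                    # average eigenvalue of S
    sigma_shrunk = (1.0 - rho) * S + rho * mu * np.eye(p)
    return np.linalg.inv(sigma_shrunk)

# Toy check in a p ~ n regime: true covariance is the identity,
# so the true precision matrix is the identity as well.
rng = np.random.default_rng(0)
n, p = 60, 40
X = rng.standard_normal((n, p))
true_precision = np.eye(p)

plain = shrinkage_precision(X, rho=0.0)   # plain inverted sample covariance
shrunk = shrinkage_precision(X, rho=0.3)  # hypothetical shrinkage intensity

err_plain = np.linalg.norm(plain - true_precision)
err_shrunk = np.linalg.norm(shrunk - true_precision)
```

With p/n = 2/3, the plain inverted sample covariance is badly conditioned and its Frobenius error against the true precision matrix is typically much larger than that of the shrunk version, which is the phenomenon the paper's calibrated estimator exploits.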
