Abstract

We derive tight lower bounds on the smallest eigenvalue of a sample covariance matrix of a centred isotropic random vector under weak or no assumptions on its components.

Highlights

  • We provide tight lower bounds on the smallest eigenvalue of a sample covariance matrix of a centred isotropic random vector under weak or no assumptions on its components

  • Lower bounds on the smallest eigenvalue of a sample covariance matrix play a crucial role in least squares problems in high-dimensional statistics

  • For a random vector X_p in R^p, consider a random p × n matrix X_{pn} with independent columns {X_{pk}}_{k=1}^{n} distributed as X_p, and the Gram matrix X_{pn} X_{pn}^T



Introduction

Lower bounds on the smallest eigenvalue of a sample covariance matrix (or a Gram matrix) play a crucial role in least squares problems in high-dimensional statistics (see, for example, [5]). For a random vector X_p in R^p, consider a random p × n matrix X_{pn} with independent columns {X_{pk}}_{k=1}^{n} distributed as X_p, and the Gram matrix X_{pn} X_{pn}^T. If X_p is centred, n^{-1} X_{pn} X_{pn}^T is the sample covariance matrix corresponding to the random sample {X_{pk}}_{k=1}^{n}. In this paper we derive sharp lower bounds for λ_p(n^{-1} X_{pn} X_{pn}^T), where λ_p(A) denotes the smallest eigenvalue of a p × p matrix A. In the proofs we use the same strategy as in [6].
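The objects above can be illustrated numerically. The sketch below (an assumption for illustration, not the paper's construction) draws n independent copies of a centred isotropic vector X_p with standard Gaussian components, forms the sample covariance matrix n^{-1} X_{pn} X_{pn}^T, and computes its smallest eigenvalue λ_p; for comparison it prints the classical asymptotic lower edge (1 − √(p/n))², which holds for i.i.d. entries with finite fourth moment.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 500  # dimension p and sample size n, so p/n = 0.1

# Columns of X are n independent copies of a centred isotropic vector X_p;
# here the components are standard Gaussian, so E[X_p X_p^T] = I_p.
X = rng.standard_normal((p, n))

# Sample covariance matrix n^{-1} X X^T and its smallest eigenvalue
# (eigvalsh returns eigenvalues of a symmetric matrix in ascending order).
S = X @ X.T / n
lam_min = np.linalg.eigvalsh(S)[0]

# Classical asymptotic lower edge (1 - sqrt(p/n))^2 for comparison.
edge = (1 - np.sqrt(p / n)) ** 2
print(f"lambda_p = {lam_min:.3f}, (1 - sqrt(p/n))^2 = {edge:.3f}")
```

For p/n well below 1, λ_p stays bounded away from zero, which is exactly the regime in which such lower bounds make least squares estimation well conditioned.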

