Abstract
Sparse principal component analysis has become one of the most widely used techniques for dimensionality reduction in high-dimensional datasets. While many methods are available for point estimation of the eigenstructure in such settings, in this paper we propose methodology for uncertainty quantification: construction of confidence intervals and tests for the principal eigenvector and the corresponding largest eigenvalue. Our methodology is based on an M-estimator with a Lasso penalty, which achieves minimax optimal rates and is used to construct a de-biased sparse PCA estimator. The de-biased estimator has a Gaussian limiting distribution and can be used for hypothesis testing and for support recovery of the first eigenvector. We demonstrate the empirical performance of the new estimator on synthetic data and show that it compares favourably with classical PCA in moderately high-dimensional regimes.
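The sketch below is purely illustrative and is not the paper's Lasso-penalised M-estimator or its de-biased version. It uses a spiked covariance model with a sparse leading eigenvector, a simple soft-thresholded PCA estimate as a stand-in for a sparse estimator (with an assumed threshold of order sqrt(log p / n)), and a sample-splitting, normal-approximation interval for the variance captured along the estimated direction, assuming Gaussian data. It is only meant to convey the flavour of comparing a sparse estimate with classical PCA and attaching an approximate confidence interval to an eigenvalue-type quantity.

```python
# Illustrative sketch only: NOT the paper's estimator. Assumptions: Gaussian data,
# a soft-thresholded PCA stand-in for the sparse estimate, and sample splitting
# for the normal-approximation interval.
import numpy as np

rng = np.random.default_rng(0)

# Spiked covariance model: Sigma = theta * v v^T + I, with a sparse leading eigenvector v.
n, p, s, theta = 200, 500, 10, 5.0
v = np.zeros(p)
v[:s] = 1.0 / np.sqrt(s)                     # sparse unit-norm leading eigenvector
Sigma = theta * np.outer(v, v) + np.eye(p)   # largest eigenvalue is theta + 1
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Split the sample: first half estimates the direction, second half builds the interval.
X1, X2 = X[: n // 2], X[n // 2 :]

def leading_eigvec(S):
    """Leading eigenvector of a symmetric matrix (eigh sorts eigenvalues ascending)."""
    _, V = np.linalg.eigh(S)
    return V[:, -1]

S1 = X1.T @ X1 / X1.shape[0]

# Classical PCA estimate and a soft-thresholded (sparse) estimate of the first eigenvector.
u_pca = leading_eigvec(S1)
tau = np.sqrt(np.log(p) / X1.shape[0])       # assumed threshold level
u_thr = np.sign(u_pca) * np.maximum(np.abs(u_pca) - tau, 0.0)
u_thr = u_thr / np.linalg.norm(u_thr) if np.linalg.norm(u_thr) > 0 else u_pca

# Estimation error (up to sign) for both estimates of v.
def err(u):
    return min(np.linalg.norm(u - v), np.linalg.norm(u + v))

print(f"classical PCA error:   {err(u_pca):.3f}")
print(f"thresholded PCA error: {err(u_thr):.3f}")

# Normal-approximation 95% CI for the variance along u_thr, using the independent half:
# for Gaussian x, y_i = (u^T x_i)^2 has mean u^T Sigma u and variance 2 (u^T Sigma u)^2.
y = (X2 @ u_thr) ** 2
lam_hat = y.mean()
half = 1.96 * np.sqrt(2.0) * lam_hat / np.sqrt(len(y))
print(f"variance along estimated direction: {lam_hat:.2f}, "
      f"95% CI [{lam_hat - half:.2f}, {lam_hat + half:.2f}]  (true lambda_1 = {theta + 1:.1f})")
```

In moderately high-dimensional regimes like this one (p larger than n, sparse signal), the thresholded direction typically tracks the true sparse eigenvector more closely than the unpenalised PCA direction, which mirrors the comparison reported in the abstract; the interval shown targets the variance along the estimated direction rather than the paper's de-biased eigenvalue estimator.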