Abstract

In this article, we consider statistical inference for high-dimensional approximate factor models. We posit a weak factor structure, in which the factor loading matrix can be sparse and the signal eigenvalues may diverge more slowly than the cross-sectional dimension, N. We propose a novel inferential procedure to decide whether each component of the factor loadings is zero, and prove that it controls the false discovery rate (FDR) below a preassigned level while the power tends to unity. This “factor selection” procedure is primarily based on a debiased version of the sparse orthogonal factor regression (SOFAR) estimator, but it is also applicable to the principal component (PC) estimator. After the factor selection, the resparsified SOFAR and sparsified PC estimators are proposed and their consistency is established. Finite-sample evidence supports the theoretical results. We apply our method to the FRED-MD dataset of macroeconomic variables and to the monthly firm-level excess returns of the constituents of the S&P 500 index. The results give very strong statistical evidence of sparse factor loadings under the identification restrictions and exhibit clear associations between the factors and categories of the variables. Furthermore, our method uncovers a very weak but statistically significant factor in the residuals of the Fama-French five-factor regressions.
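To fix ideas, the setting can be sketched numerically. The following is a minimal illustration only, not the paper's debiased-SOFAR procedure: it simulates an approximate factor model X = F Λ' + E with a sparse loading matrix, estimates the loadings by principal components, and applies a Benjamini-Hochberg step to loading-wise z-statistics as a stand-in for FDR-controlled factor selection. The dimensions (T, N, r), the plug-in standard error, and the BH substitution are assumptions made for the example, and the rotational indeterminacy that the paper handles through identification restrictions is ignored here.

import numpy as np
from scipy.stats import norm

# Illustrative sketch only: NOT the paper's debiased-SOFAR procedure.
rng = np.random.default_rng(0)
T, N, r = 200, 100, 2                        # sample size, cross-section, number of factors

F = rng.standard_normal((T, r))              # latent factors
Lam = np.zeros((N, r))                       # sparse true loading matrix
Lam[:40, 0] = 1.0                            # factor 1 loads on the first 40 series
Lam[60:, 1] = 1.0                            # factor 2 loads on the last 40 series
X = F @ Lam.T + rng.standard_normal((T, N))  # observed panel

# Principal component estimator (normalization Lambda'Lambda / N = I_r)
eigval, eigvec = np.linalg.eigh(X.T @ X / (T * N))
Lam_hat = np.sqrt(N) * eigvec[:, ::-1][:, :r]
F_hat = X @ Lam_hat / N

# Crude homoskedastic normal approximation for each estimated loading
resid = X - F_hat @ Lam_hat.T
se = np.sqrt(resid.var() / T)                # illustrative plug-in standard error
pvals = 2 * norm.sf(np.abs(Lam_hat) / se).ravel()

# Benjamini-Hochberg step-up rule at level q decides which loadings are nonzero
q = 0.05
m = pvals.size
order = np.argsort(pvals)
passed = pvals[order] <= q * np.arange(1, m + 1) / m
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
selected = np.zeros(m, dtype=bool)
selected[order[:k]] = True
print("loadings kept per factor:", selected.reshape(N, r).sum(axis=0))

In this toy example the selected support roughly matches the sparse pattern of Λ; the paper's contribution is a procedure whose FDR guarantee and power hold under weak factors and a diverging cross-sectional dimension, which the naive z-statistics above do not deliver.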
