Abstract

Researchers often have datasets measuring features $x_{ij}$ of samples, such as test scores of students. In factor analysis and PCA, these features are thought to be influenced by unobserved factors, such as skills. Can we determine how many factors, or components, drive the data? This is an important problem, because the decision made here has a large impact on all downstream data analysis. Consequently, many approaches have been developed. Parallel analysis is a popular permutation method: it randomly scrambles each feature of the data and selects the components whose singular values are larger than those of the permuted data. Despite its widespread use and empirical evidence for its accuracy, it has so far lacked theoretical justification. In this paper, we show that parallel analysis (and permutation methods more generally) consistently selects the large components in certain high-dimensional factor models. However, when the signals are too large, the smaller components are not selected. The intuition is that permutations keep the noise invariant, while "destroying" the low-rank signal. This provides a justification for permutation methods. Our work also uncovers drawbacks of permutation methods, and paves the way to improvements.
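To make the permutation scheme described above concrete, here is a minimal Python sketch; it is not the paper's implementation, and the function name, the number of permutations, and the quantile threshold are illustrative choices (common implementations compare against an upper quantile of the permuted singular values rather than a single permutation).

```python
import numpy as np

def parallel_analysis(X, n_permutations=20, quantile=1.0, seed=0):
    """Sketch of permutation-based parallel analysis.

    Selects components whose singular values exceed those of
    column-permuted copies of the data.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sv = np.linalg.svd(X, compute_uv=False)  # singular values of the data

    # Singular values of permuted data: each column is shuffled independently,
    # which preserves each feature's marginal distribution (the noise) while
    # destroying the low-rank cross-feature signal.
    perm_sv = np.empty((n_permutations, min(n, p)))
    for b in range(n_permutations):
        X_perm = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
        perm_sv[b] = np.linalg.svd(X_perm, compute_uv=False)

    threshold = np.quantile(perm_sv, quantile, axis=0)
    # Keep components until the first one that fails to beat the threshold.
    selected = 0
    for k in range(min(n, p)):
        if sv[k] > threshold[k]:
            selected += 1
        else:
            break
    return selected

# Hypothetical usage: a rank-2 signal plus Gaussian noise.
rng = np.random.default_rng(1)
n, p, r = 500, 50, 2
signal = rng.normal(size=(n, r)) @ (3.0 * rng.normal(size=(r, p)))
X = signal + rng.normal(size=(n, p))
print(parallel_analysis(X))  # with this strong signal, prints 2
```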
