Abstract
We welcome Dr Thorpe's interesting discussion (Thorpe, 1988), and we would like to take this opportunity to clarify some points.

Both MGPCA (multiple group principal component analysis) and CPCA (common principal component analysis) serve essentially the same purpose, namely the simultaneous estimation of principal components in several groups, based on the assumption that principal component directions are equal across groups while eigenvalues may differ between groups. However, CPCA has the distinct advantage that this assumption can actually be tested, using the χ²(CPC) statistic. In analyses involving more than two variables, it is usually difficult to decide, without a formal test, whether or not the assumption of common directions of principal components is reasonable.

There is also a conceptual difficulty with MGPCA. In statistical terms, both methods assume that:

(a) a certain set of parameters (namely those determining the eigenvectors) is common to all groups;

(b) there are sets of parameters (namely p eigenvalues per group) which are specific to each group.

CPCA sets up a model that reflects this structure and estimates the parameters accordingly. MGPCA, on the other hand, ignores part (b), at least temporarily, by pooling the variance-covariance matrices and extracting eigenvectors from the single pooled matrix (see the first sketch below). This may lead to reasonable results, but there is no guarantee that it will indeed do so.

The reader may find a more familiar analog in the fitting of regression lines when data are in groups. If it is assumed that all regression lines are parallel, one should set up an appropriate model based on a single slope parameter common to all groups, with groupwise intercepts, and then estimate the parameters of this model, rather than simply applying a technique which is appropriate only in the one-group case (see the second sketch below).
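To make the pooling step concrete, here is a minimal numerical sketch in Python of what MGPCA computes. The groups, sample sizes, and covariance structures are synthetic and purely illustrative, and the snippet is not Flury's maximum-likelihood CPC algorithm; it only shows pooling, a single eigendecomposition, and the group-specific spread along the pooled directions.

```python
# Minimal MGPCA sketch with synthetic data; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two populations that truly share principal axes but differ in eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # common eigenvectors
Sigma1 = Q @ np.diag([6.0, 2.0, 0.5]) @ Q.T        # group 1 covariance
Sigma2 = Q @ np.diag([3.0, 2.5, 1.0]) @ Q.T        # group 2 covariance
n1, n2 = 40, 60

# Sample covariance matrices from simulated observations.
S1 = np.cov(rng.multivariate_normal(np.zeros(3), Sigma1, size=n1), rowvar=False)
S2 = np.cov(rng.multivariate_normal(np.zeros(3), Sigma2, size=n2), rowvar=False)

# MGPCA: pool the covariance matrices, then eigendecompose once.
S_pooled = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)
_, B = np.linalg.eigh(S_pooled)                    # columns = pooled eigenvectors

# Group-specific spread along the common directions: diag(B' S_g B).
# A CPC test asks, in effect, whether the off-diagonal elements of
# B' S_g B are compatible with zero; MGPCA never examines them.
for name, S in (("group 1", S1), ("group 2", S2)):
    D = B.T @ S @ B
    print(name, "eigenvalue estimates:", np.round(np.sort(np.diag(D))[::-1], 3))
```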
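In the same spirit, a small sketch of the regression analogy, again with synthetic, illustrative numbers: the appropriate parallel-lines model (one common slope, groupwise intercepts) contrasted with the one-group shortcut of pooling all data and fitting a single line.

```python
# Parallel-lines regression sketch; data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Two groups with a common slope (2.0) but different intercepts (1.0, 4.0),
# observed over different x ranges.
x1 = rng.uniform(0, 5, 30);  y1 = 1.0 + 2.0 * x1 + rng.normal(0, 1, 30)
x2 = rng.uniform(5, 10, 30); y2 = 4.0 + 2.0 * x2 + rng.normal(0, 1, 30)

x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
g = np.concatenate([np.zeros(30), np.ones(30)])    # group indicator

# Appropriate model: one common slope plus a separate intercept per group.
X = np.column_stack([x, 1 - g, g])
slope, a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"common slope {slope:.2f}, intercepts {a1:.2f} and {a2:.2f}")

# One-group shortcut: pool everything and fit a single line. The intercept
# shift between groups leaks into the slope estimate.
naive_slope, naive_intercept = np.polyfit(x, y, 1)
print(f"pooled one-line slope {naive_slope:.2f}")
```

Because the groups here occupy different x ranges, the pooled single-line fit absorbs the between-group intercept shift into its slope, which is precisely the kind of distortion the groupwise model avoids.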