Abstract
The usual computing procedures in discriminant analysis involve both classificatory and separatory functions (Geisser 1977). The first of these concerns classification of samples from a mixture of populations. Statistical assessment of classification procedures is based on rates of correct classification (Glick 1972, 1973; Lachenbruch 1975; Michaelis 1973) or on the loss due to misclassification (Anderson 1958; Lachenbruch and Goldstein 1979). Separatory methods, on the other hand, deal with the transformation of data so that population differences are highlighted. This is done by means of canonical variates, which define a subspace of reduced dimensionality wherein data often can be displayed to advantage. The canonical approach was first suggested by the work of Fisher (1936) and is closely associated with the multivariate analysis of variance (Anderson 1958; Rao 1965). Applied researchers often fail to recognize the statistical relationships between classificatory and separatory discrimination, in large part because mathematical forms, system dimensionalities, and even objectives differ between the two approaches. Kshirsagar and Arseven (1975) previously used a sample-based argument to show that full-rank canonical transforms can be used for classification. However, a key feature of canonical analysis is the reduction of dimensionality, so it is important to know whether the same property holds for the reduced set of canonical variates. A simple matrix argument is used below to show that for certain distributions the posterior probabilities of discriminant analysis are invariant under canonical transformation. This result is used to justify the application of canonical variates for classification, thus integrating both approaches to discrimination.
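The invariance claimed in the abstract can be checked with a small numerical sketch (not part of the paper). Assuming multivariate normal populations with a common covariance matrix, equal priors, simulated data, and the pooled sample covariance as the plug-in estimate, the posterior probabilities computed from the full Gaussian discriminant rule coincide with those computed after projecting onto only the reduced set of g - 1 canonical variates. The simulation parameters (p = 5 variables, g = 3 groups) are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Simulate g = 3 Gaussian classes in p = 5 dimensions with a common covariance (illustrative assumption).
p, g, n = 5, 3, 60
L = rng.normal(size=(p, p))
Sigma = L @ L.T + p * np.eye(p)                      # common within-class covariance
means = rng.normal(scale=3.0, size=(g, p))
X = np.vstack([rng.multivariate_normal(means[k], Sigma, size=n) for k in range(g)])
y = np.repeat(np.arange(g), n)

# Pooled within-class covariance W and between-class scatter B.
mbar = X.mean(axis=0)
mk = np.vstack([X[y == k].mean(axis=0) for k in range(g)])
W = sum((X[y == k] - mk[k]).T @ (X[y == k] - mk[k]) for k in range(g)) / (len(X) - g)
B = sum((y == k).sum() * np.outer(mk[k] - mbar, mk[k] - mbar) for k in range(g))

# Canonical variates: generalized eigenvectors of (B, W), scaled so that A' W A = I.
evals, A = eigh(B, W)
A = A[:, ::-1][:, : g - 1]                           # keep only the r = g - 1 leading canonical vectors

def posteriors(scores):
    """Posterior probabilities from -0.5 * squared distances, assuming equal priors."""
    s = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

# Rule 1: Gaussian discriminant rule in the original p-dimensional space.
Winv = np.linalg.inv(W)
d_full = np.stack([-0.5 * np.einsum('ij,jk,ik->i', X - mk[k], Winv, X - mk[k])
                   for k in range(g)], axis=1)

# Rule 2: the same rule after projection onto the reduced canonical variates
# (the within-class covariance is the identity in that space).
Z, Zk = X @ A, mk @ A
d_can = np.stack([-0.5 * ((Z - Zk[k]) ** 2).sum(axis=1) for k in range(g)], axis=1)

print(np.allclose(posteriors(d_full), posteriors(d_can)))   # True (up to rounding)
```

The check works because the discarded canonical directions carry no between-group variation, so their contribution to each squared Mahalanobis distance is the same for every group and cancels in the posterior probabilities.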