Abstract

This paper is concerned with the problem of classifying an observation vector into one of two populations $\mathit{\Pi}_{1} : N_{p}(\mu_{1},\Sigma)$ and $\mathit{\Pi}_{2} : N_{p}(\mu_{2},\Sigma)$. Anderson (1973, Ann. Statist.) provided an asymptotic expansion of the distribution of a Studentized linear discriminant function and proposed a cut-off point in the linear discriminant rule to control one of the two misclassification probabilities. However, as the dimension $p$ becomes larger, the accuracy of this approximation deteriorates, which we confirm by simulation. Therefore, in this paper we derive an asymptotic expansion of the distribution of a linear discriminant function up to the order $p^{-1}$ as $N_1$, $N_2$, and $p$ tend to infinity together, under the conditions that $p/(N_{1}+N_{2}-2)$ converges to a constant in $(0, 1)$ and $N_{1}/N_{2}$ converges to a constant in $(0, \infty)$, where $N_i$ denotes the size of the sample drawn from $\mathit{\Pi}_i$ $(i=1, 2)$. Using the expansion, we propose a cut-off point. A small-scale simulation shows that the proposed cut-off point has good accuracy.
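The classification setting above can be illustrated with a minimal sketch of the classical linear discriminant rule: compute Anderson's $W$ statistic from the sample means and the pooled covariance, then assign the observation to $\mathit{\Pi}_1$ when $W$ exceeds a cut-off point $c$. The function names below are illustrative, not from the paper, and the cut-off correction derived in the paper is not reproduced here; the default $c = 0$ is the naive choice.

```python
import numpy as np

def anderson_w(x, xbar1, xbar2, S_inv):
    """Anderson's W statistic:
    W(x) = (x - (xbar1 + xbar2)/2)' S^{-1} (xbar1 - xbar2),
    where S is the pooled sample covariance matrix."""
    return (x - 0.5 * (xbar1 + xbar2)) @ S_inv @ (xbar1 - xbar2)

def classify(x, xbar1, xbar2, S_inv, cutoff=0.0):
    """Assign x to population 1 if W(x) > cutoff, else to population 2.
    The paper's contribution is a refined choice of `cutoff` that controls
    one misclassification probability in the high-dimensional regime."""
    return 1 if anderson_w(x, xbar1, xbar2, S_inv) > cutoff else 2

# Toy illustration with known means and identity covariance.
xbar1 = np.array([1.0, 0.0])
xbar2 = np.array([-1.0, 0.0])
S_inv = np.eye(2)
print(classify(np.array([2.0, 0.0]), xbar1, xbar2, S_inv))   # near mean 1
print(classify(np.array([-2.0, 0.0]), xbar1, xbar2, S_inv))  # near mean 2
```

In practice $S^{-1}$ is computed from the pooled sample covariance of the two training samples, and the simulation study in the paper compares misclassification rates under different choices of the cut-off as $p$ grows with $N_1 + N_2$.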
