Abstract

Classification accuracy has traditionally been expressed as the overall accuracy percentage, computed from the sum of the diagonal elements of the error, confusion, or misclassification matrix produced by a classifier. This article assesses the adequacy of the overall accuracy measure and demonstrates that it can give misleading and contradictory results. The Kappa test statistic, which measures inter-classifier agreement, is applied to assess the classification accuracy of two classifiers, a neural network and a decision tree model, on the same data set. The Kappa statistic is shown to be a more discerning statistical tool for assessing the classification accuracy of different classifiers, with the added advantage of being statistically testable against the standard normal distribution. It gives the analyst better interclass discrimination than the overall accuracy measure. The authors recommend that the Kappa statistic be used in preference to overall accuracy as a means of assessing classification accuracy.
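The abstract contrasts overall accuracy (the trace of the confusion matrix divided by the total sample count) with the chance-corrected Kappa statistic. The minimal sketch below illustrates that contrast using the standard Cohen's Kappa definitions; the function names, the example confusion matrix, and the simplified large-sample variance used for the z-test are illustrative assumptions, not the exact procedure or data reported in the article.

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy: sum of the confusion-matrix diagonal
    divided by the total number of classified samples."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Cohen's Kappa: chance-corrected agreement computed from the
    observed accuracy (p_o) and the agreement expected by chance
    (p_e) derived from the row and column marginals."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1.0 - p_e)

def kappa_z_score(cm):
    """Approximate z-score for testing Kappa > 0, using a common
    simplified large-sample variance approximation (an assumption;
    the article's exact variance formula may differ)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (p_o - p_e) / (1.0 - p_e)
    var = p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2)
    return kappa / np.sqrt(var)

# Hypothetical 3-class confusion matrix (rows = reference, cols = predicted)
cm = [[50, 3, 2],
      [5, 40, 5],
      [2, 8, 45]]
print(f"overall accuracy = {overall_accuracy(cm):.3f}")
print(f"kappa            = {cohens_kappa(cm):.3f}")
print(f"z-score          = {kappa_z_score(cm):.2f}")
```

Because Kappa subtracts the agreement expected from the marginals alone, two classifiers with similar overall accuracy can yield noticeably different Kappa values, which is the kind of interclass discrimination the abstract attributes to the statistic.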
