Abstract

Classifier performance evaluation is an important step in designing diagnostic systems. It serves three purposes: 1) to select the best classifier from several candidates, 2) to verify that the designed classifier meets the design requirements, and 3) to identify classifier components that need improvement. Effective evaluation requires a performance measure that quantifies the goodness of the candidate classifiers. This paper first argues that commonly used performance measures, such as accuracy and ROC analysis, are not always appropriate for fault diagnostic system design. It then proposes misclassification cost as a general performance measure suitable for binary as well as multi-class classifiers and, most importantly, for classifiers whose classes carry unequal cost consequences. The paper also provides strategies for estimating the cost matrix from fault criticality information obtained through failure modes, effects, and criticality analysis (FMECA). By evaluating the candidate classifiers considered during the design of an engine fault diagnostic system, the paper demonstrates that misclassification cost is an effective performance measure for multi-class classifiers with unequal cost consequences across classes.

© (2002) COPYRIGHT SPIE--The International Society for Optical Engineering.
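The abstract's central measure, misclassification cost, is the cost-matrix-weighted average over the confusion matrix. A minimal sketch of that computation follows; the 3-class fault example, the class labels, and all cost values are illustrative assumptions, not figures from the paper:

```python
# Average misclassification cost from a confusion matrix N (counts) and
# a cost matrix C, where C[i][j] is the cost of predicting class j when
# the true class is i (diagonal costs are zero: correct decisions are free).
def avg_misclassification_cost(confusion, cost):
    total = sum(sum(row) for row in confusion)
    weighted = sum(confusion[i][j] * cost[i][j]
                   for i in range(len(confusion))
                   for j in range(len(confusion[i])))
    return weighted / total

# Hypothetical 3-class diagnostic problem: nominal, minor fault, critical fault.
# Rows are true classes, columns are predicted classes.
confusion = [[90, 8, 2],
             [5, 40, 5],
             [1, 4, 45]]
# Unequal cost consequences, as the paper emphasizes: a missed critical
# fault (row 3, column 1) is far more expensive than a false alarm.
cost = [[0, 1, 1],
        [2, 0, 1],
        [10, 5, 0]]

print(avg_misclassification_cost(confusion, cost))
```

Accuracy alone would reward the two cost matrices equally; weighting by cost is what lets the measure distinguish classifiers that make cheap errors from those that make expensive ones.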
