Abstract

Many metrics exist for the evaluation of binary classifiers, each with its particular advantages and shortcomings. Recently, an “Efficiency Index” (EI) for the evaluation of classifiers has been proposed, based on the consistency (or matching) and contradiction (or mismatching) of outcomes. This metric and its confidence intervals are easy to calculate from the base data in a 2 × 2 contingency table, and their values can be qualitatively and semi-quantitatively categorised. For medical tests, the context in which the Efficiency Index was originally proposed, it facilitates the communication of risk (of correct diagnosis versus misdiagnosis) to both clinicians and patients. Variants of the Efficiency Index (balanced, unbiased) which take into account disease prevalence and test cut-offs have also been described. The objectives of the current paper were firstly to extend the EI construct to other formulations (balanced level, quality), and secondly to explore the utility of the EI and all four of its variants when applied to the dataset of a large prospective test accuracy study of a cognitive screening instrument. This showed that the balanced level, quality, and unbiased formulations of the EI are more stringent measures than the original EI.
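As a minimal sketch of the calculation described above, assuming the EI is the ratio of matching outcomes (true positives plus true negatives) to mismatching outcomes (false positives plus false negatives) from a 2 × 2 contingency table; the function name and the example counts are illustrative, not taken from the paper's dataset:

```python
def efficiency_index(tp: int, fp: int, fn: int, tn: int) -> float:
    """Ratio of consistent (matching) to contradictory (mismatching)
    outcomes in a 2x2 contingency table, per the abstract's description."""
    matching = tp + tn       # correct classifications (consistency)
    mismatching = fp + fn    # misclassifications (contradiction)
    if mismatching == 0:
        raise ZeroDivisionError("no mismatching outcomes; EI is undefined")
    return matching / mismatching

# Hypothetical counts for a screening test (illustrative only):
print(efficiency_index(tp=80, fp=10, fn=10, tn=100))  # 180 / 20 = 9.0
```

An EI above 1 indicates that correct classifications outnumber misclassifications; larger values indicate better test efficiency under this reading.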
