Abstract

Binary classification problems are common in the medical field, where sensitivity, specificity, accuracy, and the negative and positive predictive values are often used as measures of the performance of a binary predictor. In computer science, a classifier is usually evaluated with precision (positive predictive value) and recall (sensitivity). As a single summary measure of a classifier’s performance, the F1 score, defined as the harmonic mean of precision and recall, is widely used in the evaluation of information retrieval and information extraction because it has favorable characteristics, especially when the prevalence is low. Some statistical methods of inference have been developed for the F1 score in binary classification problems; however, they have not been extended to multi-class classification. There are three types of F1 scores in the multi-class setting, and the statistical properties of these F1 scores have hardly been discussed. We propose methods, based on the large-sample multivariate central limit theorem, for estimating F1 scores with confidence intervals.
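
As a minimal sketch of the definition above (the counts below are hypothetical, chosen only for illustration), the F1 score of a binary classifier can be computed from its true-positive, false-positive, and false-negative counts:

```python
# Hypothetical binary confusion-matrix counts (for illustration only).
tp, fp, fn = 90, 10, 30   # true positives, false positives, false negatives

precision = tp / (tp + fp)   # positive predictive value
recall = tp / (tp + fn)      # sensitivity
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"precision = {precision:.3f}, recall = {recall:.3f}, F1 = {f1:.3f}")
```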

Highlights

  • In the medical field, binary classification problems are common, and sensitivity, specificity, accuracy, and the negative and positive predictive values are often used as measures of the performance of a binary predictor

  • A classifier is usually evaluated with precision and recall, which are equal to the positive predictive value and sensitivity, respectively

  • Others have used multi-class F1 scores but have reported only point estimates without confidence intervals [6,7,8,9,10,11,12,13,14,15,16]. To address this knowledge gap, we provide methods for computing the variances of these multi-class F1 scores, so that estimating the micro-averaged F1 score and the macro-averaged F1 score with confidence intervals becomes possible in multi-class classification (see the sketch below)
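
The paper derives its confidence intervals analytically from the large-sample multivariate central limit theorem; those formulas are not reproduced here. As a rough, generic alternative for illustration only, the sketch below attaches a percentile-bootstrap confidence interval to the macro-averaged F1 score; the labels `y_true` and `y_pred` and the helper `macro_f1` are hypothetical names, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test-set labels for a 3-class problem (made up for illustration).
y_true = rng.integers(0, 3, size=500)
y_pred = np.where(rng.random(500) < 0.8, y_true, rng.integers(0, 3, size=500))

def macro_f1(t, p, n_classes=3):
    """Simple average of the per-class F1 scores."""
    f1s = []
    for k in range(n_classes):
        tp = np.sum((p == k) & (t == k))
        fp = np.sum((p == k) & (t != k))
        fn = np.sum((p != k) & (t == k))
        prec = tp / (tp + fp) if tp + fp > 0 else 0.0
        rec = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0)
    return float(np.mean(f1s))

# Percentile bootstrap: resample (true, predicted) pairs with replacement
# and recompute the macro-averaged F1 on each resample.
n = len(y_true)
boot = [macro_f1(y_true[idx], y_pred[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"macro F1 = {macro_f1(y_true, y_pred):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```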


Summary

Introduction

Sokolova and Lapalme [3] gave an alternative definition of the macro-averaged F1 score as the harmonic mean of the simple averages of the precision and recall over classes. Micro-averaging puts only a small weight on the first column because the sample size there is relatively small. This numeric example shows a shortcoming of summarizing the performance of a multi-class classifier with a single number when the within-class precision and recall vary substantially. Nevertheless, aggregate measures such as the micro-averaged and macro-averaged F1 scores are useful for quantifying the performance of a classifier as a whole.
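
As a minimal sketch of how the two averages differ (the 3-class confusion matrix below is hypothetical, not the paper's numeric example), macro-averaging takes the simple average of the per-class F1 scores (or, in the Sokolova and Lapalme variant, the harmonic mean of the averaged precision and recall), while micro-averaging pools the counts first, so larger classes carry more weight:

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows: true class, columns: predicted class).
cm = np.array([[ 10,   2,   3],
               [  1, 120,   9],
               [  4,   6, 150]])

tp = np.diag(cm)          # correctly classified counts per class
fp = cm.sum(axis=0) - tp  # predicted as the class but actually another
fn = cm.sum(axis=1) - tp  # actually the class but predicted as another

prec = tp / (tp + fp)                # per-class precision
rec = tp / (tp + fn)                 # per-class recall
f1 = 2 * prec * rec / (prec + rec)   # per-class F1

# Macro-averaged F1: simple average of the per-class F1 scores.
macro_f1 = f1.mean()

# Sokolova and Lapalme variant: harmonic mean of the averaged precision and recall.
macro_f1_alt = 2 * prec.mean() * rec.mean() / (prec.mean() + rec.mean())

# Micro-averaged F1: pool the counts first, so the small first class contributes little.
micro_prec = tp.sum() / (tp.sum() + fp.sum())
micro_rec = tp.sum() / (tp.sum() + fn.sum())
micro_f1 = 2 * micro_prec * micro_rec / (micro_prec + micro_rec)

print(f"macro F1 = {macro_f1:.3f}, alternative macro F1 = {macro_f1_alt:.3f}, micro F1 = {micro_f1:.3f}")
```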

Confidence interval for maF1
Findings
Discussion