Abstract
In ranking applications, AUCROC is widely used to measure the performance of a discriminative model. However, it applies only when the labels are binary, i.e. drawn from {0,1}; AUCROC is undefined for non-binary labels. In modelling applications where the labels are not from {0,1} but are instead probabilities of membership (p, 1−p) in the two classes, other metrics are generally used. In this paper, we propose a metric, ARatio, that can be used with both binary and probabilistic labels. We prove that it is exactly equal to AUCROC for binary labels, and that it carries the same semantics as AUCROC for probabilistic labels. We also extend the confusion matrix to probabilistic labels and redefine metrics such as precision, recall, and F1-score. Finally, we define AccRatio and show that it is equivalent to the area under the precision–recall curve for non-binary probabilistic labels.
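The abstract does not reproduce the paper's definition of ARatio. Purely as an illustration of how a pairwise AUCROC statistic can be extended to probabilistic labels, the sketch below weights each ordered pair (i, j) by the probability p_i(1 − p_j) that i is truly positive and j truly negative. The function names `pairwise_auc` and `soft_pairwise_auc` and the weighting scheme are hypothetical assumptions for this sketch, not the paper's construction of ARatio.

```python
import numpy as np

def pairwise_auc(scores, labels):
    """AUCROC via the Mann-Whitney pairwise form:
    P(score of a random positive > score of a random negative),
    with ties counted as 1/2. Requires binary labels in {0, 1}."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def soft_pairwise_auc(scores, p):
    """Hypothetical extension to probabilistic labels p_i = P(y_i = 1):
    each ordered pair (i, j) is weighted by p_i * (1 - p_j), the probability
    that i is positive and j is negative. When every p_i is 0 or 1, this
    reduces exactly to pairwise_auc. (Illustrative only; not the paper's
    ARatio.)"""
    w = p[:, None] * (1 - p)[None, :]          # pair weights
    diff = scores[:, None] - scores[None, :]   # pairwise score gaps
    wins = (diff > 0) + 0.5 * (diff == 0)      # rank agreement, ties = 1/2
    np.fill_diagonal(w, 0.0)                   # exclude self-pairs
    np.fill_diagonal(wins, 0.0)
    return (w * wins).sum() / w.sum()

# Sanity check: for binary labels, the soft form coincides with AUCROC,
# mirroring the abstract's claim of exact equality in the binary case.
rng = np.random.default_rng(0)
scores = rng.normal(size=6)
labels = np.array([1, 1, 0, 1, 0, 0])
assert np.isclose(pairwise_auc(scores, labels),
                  soft_pairwise_auc(scores, labels.astype(float)))
```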