Abstract

In this paper, we present and discuss a novel reliability metric to quantify the extent to which a ground truth, generated in multi-rater settings, can be considered a reliable basis for the training and validation of machine learning predictive models. To define this metric, three dimensions are taken into account: agreement (that is, how much a group of raters mutually agree on a single case); confidence (that is, how certain a rater is of each rating expressed); and competence (that is, how accurate a rater is). The metric thus produces a reliability score weighted for the raters’ confidence and competence, but only the confidence needs to be actually collected, since the competence can be estimated from the ratings themselves if no further information is available. We found that our proposal was both more conservative and more robust to known paradoxes than other existing agreement measures, by virtue of a more articulated notion of the agreement due to chance, which is based on an empirical estimation of the reliability of the single raters involved. We discuss the above metric within a realistic annotation task that involved 13 expert radiologists in labeling the MRNet dataset. We also provide a nomogram by which to assess the actual accuracy of a classification model, given the reliability of its ground truth. In this respect, we also make the point that theoretical estimates of model performance are consistently overestimated if ground truth reliability is not properly taken into account.
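
As a purely illustrative aid, the sketch below shows how a per-case agreement score could be weighted by rater confidence and competence in the spirit described above. The function names, the pairwise weighting scheme, and the competence estimator are all hypothetical; the actual metric is defined formally in the paper.

```python
import numpy as np

def estimate_competence(ratings):
    """Crude competence estimate when no external information is available:
    each rater's rate of agreement with the per-case majority label.
    (Hypothetical helper; the paper estimates competence from the ratings
    in its own way.)"""
    majority = (ratings.mean(axis=1) >= 0.5).astype(int)
    return (ratings == majority[:, None]).mean(axis=0)

def weighted_reliability(ratings, confidence, competence=None):
    """Illustrative reliability score for a binary multi-rater task.

    ratings    : (n_cases, n_raters) array of 0/1 labels
    confidence : (n_cases, n_raters) array in [0, 1], self-reported certainty
    competence : (n_raters,) array in [0, 1]; estimated from the ratings if None

    Pairwise agreement on each case is weighted by the raters' confidence on
    that case and by their overall competence; the case scores are averaged.
    This is only a sketch of the idea, not the metric defined in the paper.
    """
    if competence is None:
        competence = estimate_competence(ratings)
    n_cases, n_raters = ratings.shape
    case_scores = []
    for i in range(n_cases):
        num = den = 0.0
        for a in range(n_raters):
            for b in range(a + 1, n_raters):
                w = (confidence[i, a] * confidence[i, b]
                     * competence[a] * competence[b])
                num += w * float(ratings[i, a] == ratings[i, b])
                den += w
        case_scores.append(num / den if den > 0 else 0.0)
    return float(np.mean(case_scores))

# toy example: 4 cases, 3 raters, uniform self-reported confidence of 0.8
ratings = np.array([[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 1, 1]])
confidence = np.full(ratings.shape, 0.8)
print(round(weighted_reliability(ratings, confidence), 3))
```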

Highlights

  • The research purpose of this paper is to shed light on the concept of the reliability of the decision supports that are developed by means of supervised Machine Learning (ML) techniques. In particular, our approach focuses on the reliability of the ground truth that is generated in multi-rater settings and used to train and validate such ML models

  • This nomogram can be used for any value observed along these three dimensions; as an example, we show the losses in accuracy associated with the commonly adopted minimum reliability threshold (i.e., 0.67) for models whose developers report accuracies of 95%, 90%, and 85%: in all these cases the deviation is approximately 6%, a margin much larger than what is usually tolerated when choosing the best model after a cross-validation session (a back-of-the-envelope illustration of this effect is sketched right after this list)
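
To give a back-of-the-envelope sense of why accuracy reported against an imperfect ground truth overstates accuracy on the real phenomenon, the snippet below assumes that the model's errors and the labeling errors are independent and that the ground truth has a hypothetical accuracy of 0.93; both assumptions are made only for illustration, as the paper's nomogram is derived from its own reliability metric.

```python
def implied_true_accuracy(reported_acc, gt_acc):
    """Accuracy against the real labels implied by an accuracy measured on an
    imperfect binary ground truth, under the (simplifying, assumed) hypothesis
    that model errors and labeling errors are independent."""
    return reported_acc * gt_acc + (1 - reported_acc) * (1 - gt_acc)

for reported in (0.95, 0.90, 0.85):
    actual = implied_true_accuracy(reported, 0.93)   # 0.93 is hypothetical
    print(f"reported {reported:.2f} -> implied actual {actual:.3f}")
# Each case loses roughly 5-6 percentage points, i.e., the same order of
# magnitude as the deviation reported in the paper.
```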

Summary

Introduction

Our approach focuses on the reliability of the ground truth that is generated in multi-rater settings and used to train and validate such ML models. It has recently been highlighted that the data quality issues affecting the ground truth may severely impact the reliability of the ML predictive models that are trained and validated on it [4,5,6], likely making any estimate of their reliability optimistic.

[Figure: a 95% accurate ground truth, as a function of the average accuracy of the raters involved (on the x-axis), if known; these estimates are obtained analytically and have general application.]
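
The kind of analytical estimate referred to in the figure can be illustrated with a simple majority-vote model: assuming independent raters of equal accuracy and ties broken at random (an assumption made here only for illustration, not necessarily the paper's derivation), the probability that the majority label is correct can be computed as follows.

```python
from math import comb

def majority_vote_accuracy(n_raters, rater_acc):
    """Probability that the majority label of n independent raters, each with
    accuracy rater_acc, matches the true label (ties broken at random).
    A simple Condorcet-style estimate, assumed here only to illustrate the
    kind of analytical calculation mentioned above."""
    p_correct = 0.0
    for k in range(n_raters + 1):
        p_k = comb(n_raters, k) * rater_acc**k * (1 - rater_acc)**(n_raters - k)
        if 2 * k > n_raters:          # strict majority is correct
            p_correct += p_k
        elif 2 * k == n_raters:       # tie: correct half the time
            p_correct += 0.5 * p_k
    return p_correct

# e.g., how many raters of average accuracy 0.8 would yield a ~95% accurate
# ground truth under this simplified model?
for n in (1, 3, 5, 7, 9):
    print(n, round(majority_vote_accuracy(n, 0.8), 3))
```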

