Abstract
Background
We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine.

Methods
Accordingly, we propose a framework distinguishing the reference labeling (or Gold Standard) from the set of annotations from which it is usually derived (the Diamond Standard). We define a set of quality dimensions and related metrics: representativeness (are the available data representative of their reference population?); reliability (do the raters agree with each other in their ratings?); and accuracy (are the raters’ annotations a true representation?). The metrics for these dimensions are, respectively, the degree of correspondence, Ψ, the degree of weighted concordance, ϱ, and the degree of fineness, Φ. We apply and evaluate these metrics in a diagnostic user study involving 13 radiologists.

Results
We evaluate Ψ against hypothesis-testing techniques, highlighting that our metrics can better evaluate distribution similarity in high-dimensional spaces. We discuss how Ψ could be used to assess the reliability of new predictions or for train-test selection. We report the value of ϱ for our case study and compare it with traditional reliability metrics, highlighting both their theoretical properties and the reasons that they differ. Then, we report the degree of fineness as an estimate of the accuracy of the collected annotations and discuss the relationship between this latter degree and the degree of weighted concordance, which we find to be moderately but significantly correlated. Finally, we discuss the implications of the proposed dimensions and metrics with respect to the context of Explainable Artificial Intelligence (XAI).

Conclusion
We propose different dimensions and related metrics to assess the quality of the datasets used to build predictive models and Medical Artificial Intelligence (MAI).
We argue that the proposed metrics are feasible for application in real-world settings for the continuous development of trustable and interpretable MAI systems.
Highlights
We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine
As we show in Appendix A, this may be due to the inability of the Dempster combination rule to properly differentiate between genuine agreement and agreement due to chance. This distinction is the key concept in the measurement of inter-rater reliability: the purpose of this dimension may be described as quantifying the amount of observed agreement that is not due to chance
Despite the incompatibility between the standard interpretation of Dempster-Shafer theory and the quantification of inter-rater reliability, in Appendix A we show how the proposed metrics can be interpreted as arising from the evidence-theoretic framework by relying on non-standard aggregation rules discussed in the literature to avoid some shortcomings of the Dempster rule of aggregation [21]
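The idea of "agreement that is not due to chance" mentioned in the highlights above is the same idea underlying traditional chance-corrected reliability metrics such as Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected if both raters labeled at random according to their own marginal frequencies. The following is a minimal sketch of that baseline computation, not of the ϱ metric proposed in the paper; the two rating lists are hypothetical:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance."""
    n = len(ratings_a)
    # Observed agreement: fraction of cases labeled identically
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independent raters with these marginals
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters labeling 10 cases
a = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
b = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "pos", "pos", "pos"]
# Here p_o = 0.8 but p_e = 0.52, so kappa is only about 0.58
```

Note how the 80% raw agreement shrinks once chance agreement is discounted; a metric that aggregated the raw agreement directly (as the standard Dempster rule does with concordant evidence) would overstate reliability.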
Summary
We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine. Trust in technology is a vast research topic (e.g., [2]), but we can ground our approach on an intuitive notion of it: we trust an advisor (and are willing to rely on his advice) if his reputation is good; if we generally agree with his recommendations (i.e., we find them plausible); if he convinces us that he is right (or persuasiveness); and if we think his sources and knowledge are good (or expertise). These intuitive notions have clear counterparts in the MAI domain: reputation relates to accuracy (on past cases); plausibility to human-machine concordance; persuasiveness relates to explainability, or better yet, to causability [3]; and the advisor’s expertise relates to what one of the founders of ML evocatively called the experience of the ML system [4] (p. 2).