Abstract

For the representative problem of prostate cancer grading, we sought to simultaneously model both the continuous nature of the case spectrum and the decision thresholds of individual pathologists, allowing quantitative comparison of how they handle cases at the borderline between diagnostic categories. Experts and pathology residents each rated a standardized set of prostate cancer histopathological images on the International Society of Urological Pathology (ISUP) scale used in clinical practice. They diagnosed 50 histologic cases spanning a range of malignancy, including intermediate cases in which a clear distinction was difficult. We report a statistical model showing the degree to which each individual participant can separate the cases along the latent decision spectrum. In total, 36 physicians rated the slides: 23 ISUP pathologists and 13 residents. As anticipated, the cases showed a full continuous range of diagnostic severity. Case locations on the logit scale were consistent with the consensus rating (consensus ISUP 1: mean -0.93 logits [95% confidence interval {CI}, -1.10 to -0.78]; ISUP 2: -0.19 logits [-0.27 to -0.12]; ISUP 3: 0.56 logits [0.06 to 1.06]; ISUP 4: 1.24 logits [1.10 to 1.38]; ISUP 5: 1.92 logits [1.80 to 2.04]). The best raters were able to meaningfully discriminate among all 5 ISUP categories, with intercategory thresholds that were quantifiably precise. We present a method that allows simultaneous quantification of both the confusability of a particular case and the skill with which raters distinguish the cases. The technique generalizes beyond the current example to other clinical situations in which a diagnostician must impose an ordinal rating on a biological spectrum.

Key Points

Question: How can we quantify skill in visual diagnosis for cases that sit at the border between 2 ordinal categories, cases that are inherently difficult to diagnose?

Findings: In this analysis of pathologists and residents rating prostate biopsy specimens, decision-aligned response models are calculated that show how pathologists would be likely to classify any given case on the diagnostic spectrum. Decision thresholds are shown to vary in their location and precision.

Significance: Improving on traditional measures such as kappa and receiver operating characteristic curves, this specialization of item response models allows better individual feedback to both trainees and pathologists, including better quantification of acceptable decision variation.
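The abstract does not spell out the model's functional form; a common choice for this kind of decision-aligned ordinal model is a cumulative-logit (graded response) formulation, in which each case has a latent severity on the logit scale and each rater has ordered thresholds between the 5 ISUP categories, with a discrimination parameter capturing how precisely those thresholds are applied. The sketch below is a minimal illustration under that assumption: the threshold and discrimination values are invented for demonstration, and only the borderline case location (roughly midway between the reported consensus ISUP 2 and ISUP 3 positions) is drawn from the abstract's estimates.

```python
import numpy as np

def category_probs(theta, thresholds, discrimination):
    """Cumulative-logit (graded response) category probabilities for one rater.

    theta:          latent severity of the case (logits)
    thresholds:     ordered cutpoints b_1 < ... < b_{K-1} between the K categories
    discrimination: rater slope a; larger a means sharper (more precise) thresholds
    Returns an array of P(rating = 1), ..., P(rating = K).
    """
    b = np.asarray(thresholds, dtype=float)
    # P(rating >= k+1) = sigmoid(a * (theta - b_k)) for each threshold b_k
    p_ge = 1.0 / (1.0 + np.exp(-discrimination * (theta - b)))
    upper = np.concatenate(([1.0], p_ge))  # P(rating >= k), k = 1..K
    lower = np.concatenate((p_ge, [0.0]))  # P(rating >= k+1)
    return upper - lower                   # P(rating = k) by differencing

# Hypothetical rater thresholds between ISUP 1|2, 2|3, 3|4, 4|5 (logits).
thresholds = [-0.55, 0.20, 0.90, 1.60]

# A borderline case roughly midway between the consensus ISUP 2 (-0.19)
# and ISUP 3 (0.56) locations reported in the abstract.
theta_borderline = 0.19

for a, label in [(4.0, "precise rater"), (1.0, "imprecise rater")]:
    p = category_probs(theta_borderline, thresholds, a)
    print(f"{label}: {np.round(p, 3)}")
```

Run on the borderline case, the high-discrimination rater concentrates probability on ISUP 2 and 3, while the low-discrimination rater spreads it across several categories; this is the distinction between threshold location and threshold precision that the decision-aligned model is meant to quantify for each individual rater.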
