Abstract

Approaches that use a simulated patient case to study and assess diagnostic reasoning typically rely on the correct diagnosis of the case as a measure of success and as an anchor for other measures. Commonly, the correctness of a diagnosis is determined by the judgment of one or more experts. In this study, the consistency of experts' judgments of the correctness of a diagnosis, and the structure of knowledge supporting those judgments, were explored using a card sorting task. Seven expert pediatricians were asked to sort into piles the diagnoses proposed by 119 individuals of varying experience levels who had worked through a simulated patient case of Haemophilus influenzae Type B (HIB) meningitis. The experts were asked to sort the proposed diagnoses by similarity of content, then to order the piles by correctness relative to the known correct diagnosis (HIB meningitis), and finally to judge which piles contained correct or incorrect diagnoses. We found that, contrary to previous studies, the experts shared a common conceptual framework of the diagnostic domain being considered and were consistent in how they categorized the diagnoses. However, in line with previous studies, the experts differed greatly in their judgments of which diagnoses were correct. This study has important implications for understanding expert knowledge, for scoring performance on simulated or real patient cases, for providing feedback to learners in the clinical setting, and for establishing criteria that define correctness in studies of diagnostic error and diagnostic reasoning.
