Abstract

Background: A variety of psychometric analyses have been conducted on script concordance tests (SCTs), which are purported to measure data interpretation, an essential component of clinical reasoning. Although the body of published SCT research is broad, controversies over best practices and evidentiary gaps remain.

Purposes: In this study, SCT data were used to test the psychometric properties of 6 scoring methods. The study also explored whether SCT items clustered by difficulty and type could discriminate between medical training levels.

Methods: Scores from a problem-solving SCT (SCT-PS; n = 522) and an emergency medicine SCT (SCT-EM; n = 1,040) were collected at a large medical institution. Item analyses were performed to optimize each dataset. Items were categorized into difficulty levels and organized into types. Correlational analyses, one-way multivariate analysis of variance (MANOVA), repeated measures analysis of variance (ANOVA), and one-way ANOVA were conducted to explore the study aims.

Results: All 6 scoring methods differentiated between training levels. Longitudinal analysis of SCT-PS data showed that fourth-year medical students (MS4s) significantly (p < .001) outperformed their own scores as second-year students (MS2s) in all difficulty categories. Cross-sectional analysis of SCT-EM data showed significant differences (p < .001) between experienced EM physicians, EM residents, and MS4s at each level of difficulty. Items categorized by type also detected differences between training levels.

Conclusions: Of the 6 scoring methods, 5-point scoring solutions generated more reliable measures of data interpretation than 3-point scoring methods. Data interpretation ability was a function of experience at every level of item difficulty. Items categorized by type exhibited discriminatory power, providing modest evidence for the construct validity of SCTs.
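The abstract does not specify the 6 scoring methods studied. As a point of reference, the sketch below illustrates the aggregate (panel-based) scoring commonly described in the SCT literature, in which an examinee's credit on an item is the number of panelists who chose that response divided by the number who chose the modal response, along with a 3-point collapse of the 5-point Likert scale and a one-way ANOVA across training levels. All names and data are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from scipy.stats import f_oneway

def aggregate_scores(panel_responses):
    """Map each Likert response to partial credit in [0, 1]:
    credit = (panelists choosing the response) / (panelists choosing the mode)."""
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {resp: n / modal for resp, n in counts.items()}

def collapse_to_3pt(response):
    """Collapse a 5-point response (-2..+2) to 3 points (-1, 0, +1)."""
    return (response > 0) - (response < 0)

# Hypothetical panel of 10 experts rating one item on a 5-point scale.
panel = [2, 2, 2, 1, 1, 1, 1, 0, 0, -1]

five_pt = aggregate_scores(panel)
three_pt = aggregate_scores([collapse_to_3pt(r) for r in panel])

examinee_response = 2
print("5-point credit:", five_pt.get(examinee_response, 0.0))                   # 3/4 = 0.75
print("3-point credit:", three_pt.get(collapse_to_3pt(examinee_response), 0.0)) # 7/7 = 1.0

# Hypothetical total scores by training level, compared with a one-way ANOVA
# as in the study's cross-sectional design.
ms4, residents, attendings = [62, 65, 60, 64], [70, 72, 68, 71], [78, 80, 77, 79]
F, p = f_oneway(ms4, residents, attendings)
print(f"F = {F:.2f}, p = {p:.4f}")
```

Note how the 3-point collapse merges the +1 and +2 responses into a single modal category, discarding the panel's finer-grained disagreement; this loss of variance is one plausible reason 5-point solutions yielded more reliable measures.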
