Abstract

Scoring protocols for most standardized-patient (SP) examinations have not received extensive scrutiny, and their validity has not been well established. A holistic method of scoring performance on an SP examination (i.e., one based on raters' overall impressions) was pilot-tested in the spring of 1992 by administering an examination to two cohorts of fourth-year students at the Albert Einstein College of Medicine at Yeshiva University. The examination consisted of eight SP stations representing a range of medical problems. Two to three experienced clinical teachers independently reviewed all the written material for each encounter. In Phase I of the study, holistic ratings of outstanding, competent, marginal, or inadequate were given for overall clinical competence for a cohort of 16 students; in Phase II, holistic ratings were given separately for data-gathering and communication skills for a cohort of 26 students. Intercase and interrater reliability analyses were performed. Adequate reliability coefficients were obtained on a two-hour test; total scores (i.e., students' scores across all eight cases) discriminated between groups of examinees; and, on average, less than two minutes were required to score an encounter. Although based on a small sample, the study's results suggest that this holistic method of scoring performance may be useful in some situations. Since experienced clinical teachers recognize clinical competence when they see it and agree in their judgments of it, developers of scoring protocols for SP examinations need to establish that the results obtained are congruent with the judgments of expert teachers.
