Abstract

Graduate medical education programs must ensure residents and fellows acquire the skills needed for independent practice. Workplace-based observational assessments are informative but can be time- and resource-intensive. In this study we sought to gather "relations-to-other-variables" validity evidence for scores generated by the Electromyography Direct Observation Tool (EMG-DOT) to inform its use as a measure of electrodiagnostic skill acquisition. Scores on multiple assessments were compiled for trainees during Clinical Neurophysiology and Electromyography rotations at a large US academic medical center. Relationships between workplace-based EMG-DOT scores (n=298) and scores on a prerequisite simulated patient exercise, patient experience surveys (n=199), end-of-rotation evaluations (n=301), and an American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) self-assessment examination were assessed using Pearson correlations. Among 23 trainees, EMG-DOT scores assigned by physician raters correlated positively with end-of-rotation evaluations (r=0.63, P=.001), but EMG-DOT scores assigned by technician raters did not (r=0.10, P=.663). When physician and technician ratings were combined, higher EMG-DOT scores correlated with better patient experience survey scores (r=0.42, P=.047), but not with simulated patient or AANEM self-assessment examination scores. End-of-rotation evaluations can provide valid assessments of trainee performance when completed by individuals with ample opportunities to directly observe trainees. Inclusion of observational assessments by technicians and patients provides a more comprehensive view of trainee performance. Workplace- and classroom-based assessments provide complementary information about trainee performance, reflecting underlying differences in the types of skills measured.
