Abstract

To document two sources of validity evidence for simulation-based assessment in neurological emergencies. A critical aspect of education is the development of evaluation techniques that assess learners' performance in settings that reflect actual clinical practice. Simulation-based evaluation affords the opportunity to standardize evaluations but requires validation. We identified topics from the Neurocritical Care Society's Emergency Neurological Life Support (ENLS) training, cross-referenced with the American Academy of Neurology's core clerkship curriculum, and used a modified Delphi method to develop simulations for assessment in neurocritical care. We constructed checklists of action items and communication skills, merging ENLS checklists with relevant clinical guidelines, and used global rating scales scored from one (novice) to five (expert) for each case. Participants included neurology sub-interns, neurology residents, neurosurgery interns, non-neurology critical care fellows, neurocritical care fellows, and neurology attending physicians. Ten evaluative simulation cases were developed. To date, 64 participants have completed 274 evaluative simulation scenarios. Participants were very satisfied with the cases (Likert scale 1 = not at all satisfied to 7 = very satisfied; median 7, interquartile range (IQR) 7-7), found them very realistic (Likert scale 1 = not at all realistic to 7 = very realistic; median 6, IQR 6-7), and rated them appropriately difficult (Likert scale 1 = much too easy to 7 = much too difficult; median 4, IQR 4-5). Interrater reliability was acceptable for both checklist action items (kappa = 0.64) and global rating scales (Pearson correlation r = 0.70). We demonstrated two sources of validity evidence for ten simulation cases assessing management of neurological emergencies.
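The abstract does not describe how the reliability statistics were computed, but both are standard measures of interrater agreement. Below is a minimal sketch, assuming two raters independently score each scenario, of one common way to compute Cohen's kappa for the binary checklist items and Pearson's r for the global rating scale; all data and variable names are hypothetical and not the study's, and the scipy/scikit-learn calls are illustrative rather than the authors' method.

```python
# Minimal sketch (not from the paper) of computing the two reported
# interrater reliability statistics from paired rater scores.
# All data below are hypothetical, illustrative values.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-item checklist marks (1 = performed, 0 = not performed)
# assigned independently by two raters watching the same scenario.
rater_a_checklist = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b_checklist = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

# Cohen's kappa: chance-corrected agreement for categorical checklist items.
kappa = cohen_kappa_score(rater_a_checklist, rater_b_checklist)

# Hypothetical global rating scale scores (1 = novice to 5 = expert),
# one per scenario, from each of the two raters.
rater_a_grs = [3, 4, 2, 5, 3, 4, 2, 4]
rater_b_grs = [3, 5, 2, 4, 3, 4, 3, 4]

# Pearson correlation: linear agreement between the two raters' global ratings.
r, p_value = pearsonr(rater_a_grs, rater_b_grs)

print(f"Cohen's kappa (checklist items): {kappa:.2f}")
print(f"Pearson r (global rating scale): {r:.2f} (p = {p_value:.3f})")
```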
