Abstract

This article describes the implementation of the Creighton Simulation Evaluation Instrument to evaluate student performance during a simulated home visit experience. A total of 48 groups of students participating in the simulation were evaluated by peer evaluators and by faculty. Interrater reliability was found to be low to fair. The low agreement between raters may stem from several factors, including faculty members' greater familiarity with the instrument and their greater ability to identify evidence of critical thinking displayed by students engaged in the simulation.

