Abstract
Introduction
In our program, simulated patients (SPs) give feedback to medical students during communication skills training. To ensure effective training, quality control of the SPs' feedback should be implemented. At other institutions, medical students evaluate the SPs' feedback for quality control (Bouter et al., 2012). When considering how to implement quality control for SPs' feedback in our program, we wondered whether evaluation by students would yield the same scores as evaluation by experts.

Methods
Consultations simulated by 4th-year medical students with SPs were videotaped, including the SPs' feedback to the students (n = 85). At the end of the training sessions, students rated the SPs' performance using the Bernese Assessment for Role-play and Feedback (BARF), a rating instrument containing 11 items on feedback quality. In addition, the videos were evaluated by 3 trained experts using the BARF.

Results
The experts showed high interrater agreement when rating identical feedback (ICCunjust = 0.953). Comparing student and expert ratings, high agreement was found for the following items:
1. The SP invited the student to reflect on the consultation first (Amin [minimal agreement] = 97%).
2. The SP asked the student what he/she liked about the consultation (Amin = 88%).
3. The SP started with positive feedback (Amin = 91%).
4. The SP compared the student with other students (Amin = 92%).
In contrast, the following items showed differences between the ratings of experts and students:
1. The SP used precise situations for feedback (Amax [maximal agreement] = 55%). Students rated 67 of the SPs' feedbacks as perfect on this item (the highest rating on a 5-point Likert scale), while only 29 feedbacks were rated this way by the experts.
2. The SP gave precise suggestions for improvement (Amax = 75%). 62 of the SPs' feedbacks obtained the highest rating from students, while only 44 did so in the view of the experts.
3. The SP speaks about his/her role in the third person (Amax = 60%). Students rated 77 feedbacks with the highest score, while experts judged only 43 this way.

Conclusion
Although the students' evaluation agreed with that of the experts on some items, students awarded the SPs' feedback the optimal score more often than the experts did. Moreover, it seems difficult for students to notice when SPs talk about their role in the first rather than the third person. Since precision and speaking about the role in the third person are important quality criteria for feedback, this result should be taken into account when considering students' evaluation of SPs' feedback for quality control.

Reference: Bouter, S., E. van Weel-Baumgarten, and S. Bolhuis. 2012. Construction and Validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' Ability to Role-Play and Provide Feedback to Students. Academic Medicine: Journal of the Association of American Medical Colleges.
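The ICCunjust of 0.953 reported in the Results is an intraclass correlation coefficient. The abstract does not state which ICC model was used, so purely as an illustrative sketch, the widely used two-way random-effects, absolute-agreement, single-rater form ICC(2,1) can be computed from an n-subjects-by-k-raters rating matrix as follows (the function name `icc_2_1` is hypothetical, not from the study):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: array-like of shape (n_subjects, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Sums of squares from a two-way ANOVA without replication
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because this is an absolute-agreement form, a constant offset between raters (e.g. one rater scoring one point higher throughout) lowers the coefficient below 1, unlike a consistency-type ICC.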