Abstract

Purpose
In medical education, peer assessment is considered an effective learning strategy. Although several studies have examined agreement between peer and faculty assessments of basic life support (BLS), few have done so for advanced resuscitation skills (ARS) such as intubation and defibrillation. This study therefore aimed to determine the degree of agreement between medical students’ and faculty assessments of ARS examinations.

Methods
This retrospective exploratory study was conducted during the emergency medicine (EM) clinical clerkship of fourth-year medical students from April to July 2020. A faculty assessor (FA) and a peer assessor (PA) assessed each examinee’s resuscitation skills (including BLS, intubation, and defibrillation) using a checklist of 20 binary items (performed or not performed) and 1 global proficiency rating on a 5-point Likert scale. After receiving feedback and training, each examinee then served as the PA for the next examinee. All 54 students participated in peer assessment. The assessments of 44 FA-PA pairs were analyzed using the intraclass correlation coefficient (ICC) and Gwet’s first-order agreement coefficient.

Results
The PA scores were higher than the FA scores (mean ± standard deviation, 20.2 ± 2.5 [FA] vs. 22.3 ± 2.4 [PA]; P<0.001). Agreement was poor to moderate for the overall checklist (ICC, 0.55; 95% confidence interval [CI], 0.31 to 0.73; P<0.01), BLS (ICC, 0.19; 95% CI, -0.11 to 0.46; P<0.10), intubation (ICC, 0.51; 95% CI, 0.26 to 0.70; P<0.01), and defibrillation (ICC, 0.49; 95% CI, 0.23 to 0.68; P<0.01).

Conclusion
Senior medical students’ ARS assessments showed unreliable agreement with faculty assessments. If peer assessment is planned in skills education, comprehensive preparation and sufficient assessor training should be provided in advance.
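For readers unfamiliar with Gwet’s first-order agreement coefficient (AC1) used in the analysis above, the sketch below is a minimal plain-Python illustration for the two-rater, binary-item case. It is not the authors’ code (the study’s analyses were run in SPSS), and the function name and toy ratings are invented for illustration.

```python
def gwet_ac1(rater1, rater2):
    """Gwet's first-order agreement coefficient (AC1) for two raters
    scoring the same items on a binary scale (1 = performed, 0 = not).

    AC1 = (pa - pe) / (1 - pe), where pa is the observed proportion of
    agreement and pe = 2*pi*(1 - pi) is Gwet's chance-agreement term,
    with pi the mean proportion of "performed" ratings across raters.
    """
    n = len(rater1)
    pa = sum(a == b for a, b in zip(rater1, rater2)) / n
    pi = (sum(rater1) + sum(rater2)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical checklist ratings for 8 items by an FA and a PA:
fa = [1, 1, 1, 0, 1, 0, 1, 1]
pa = [1, 1, 1, 1, 1, 0, 1, 1]
ac1 = gwet_ac1(fa, pa)
```

Unlike Cohen’s kappa, AC1 remains stable when most items are scored “performed,” which is why it is often preferred for checklist data with high trait prevalence.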

Highlights

  • Intraclass correlation coefficient (ICC) estimates and their 95% confidence intervals (CIs) were calculated using IBM SPSS ver. 25.0 (IBM Corp., Armonk, NY, USA) based on a 1-way random-effects model, because each subject was rated by a different faculty assessor (FA)-peer assessor (PA) pair (44 pairs in total)

  • We analyzed the scores of 44 FA-PA pairs, excluding 10 students who participated in the peer assessment by video

  • The PA scores were higher than the FA scores (mean ± standard deviation, 20.2 ± 2.5 [FA] versus 22.3 ± 2.4 [PA]; P < 0.001)
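The 1-way random-effects ICC described in the highlights can also be computed without SPSS. The sketch below is an illustrative plain-Python implementation of ICC(1) (single measurement, one-way random effects), assuming the standard ANOVA-based formula; it is not the authors’ code, and the example data are invented.

```python
def icc_oneway(ratings):
    """ICC(1): one-way random-effects, single-measurement ICC.

    `ratings` is a list of rows, one per subject, each holding the
    scores given by that subject's k raters (here k = 2, an FA-PA pair).
    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), from a one-way ANOVA
    with subjects as the random factor.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(x for row in ratings for x in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subjects mean square
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    # Within-subjects mean square
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical FA/PA checklist totals for four examinees:
pairs = [[18, 20], [21, 23], [17, 19], [24, 24]]
icc = icc_oneway(pairs)
```

The one-way model is the appropriate choice here because each examinee was rated by a different FA-PA pair, so raters cannot be treated as a fixed, shared panel.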



Introduction

Background/rationale: Peer assessment has traditionally been used in medical training. In previous studies, medical students provided high-quality education that was as effective as professional training [4]. In studies of peer assessment in BLS training, senior medical students were able to make reliable assessments of their peers’ performance [5]. The interrater reliability between professional faculty assessors (FAs) and peer assessors (PAs) has likewise been shown to be good [6]. However, most studies of peer assessment in skills training have been limited to BLS; few have examined peer assessment of advanced resuscitation skills, such as tracheal intubation and manual defibrillation, among medical students. To introduce peer assessment for educational purposes, its reliability and validity should be established, but studies have not provided sufficient psychometric data [7].

Objectives
Methods
Results
Discussion
Conclusion
