Abstract

Background: Published methods for quantitatively measuring performance during resuscitative care are lacking. Members of our group have previously published psychometric analyses of task-based scoring instruments used in educational research in pediatric resuscitation. Those investigations used instruments designed for specific cases in pediatric resuscitation rather than for more generalizable application. We hypothesized that a novel scoring instrument would reliably assess clinical performance during simulated cardiac arrest.

Methods: This study was conducted at 11 pediatric centers in Canada and the US. Teams of pediatric providers performed a simulated cardiac arrest scenario (asystole for 6 minutes, ventricular fibrillation for 6 minutes). A task-based scoring instrument was designed by investigator consensus, using a 0-, 1-, or 2-point scoring system to rate performance during cardiac arrest. Items were chosen according to the essential steps in the pulseless arrest algorithm of the AHA Pediatric Advanced Life Support course and included CPR performance parameters (chest compression rate, depth, release, pauses), defibrillation (dose in J/kg, timing), and epinephrine (dose, timing). Multiple raters reviewed and scored a set of simulations. Overall interrater reliability was measured; a fully-crossed generalizability (G) study with team and rater as facets was performed to determine the variance in scores attributable to each facet; and a decision (D) study was done to determine the effect of additional raters and scenarios on the G coefficient.

Results: Three raters scored four videos. Overall scores ranged from 53/90 (59%) to 73/90 (81%) of possible points. The intraclass correlation coefficient was 0.77 (F(3,8) = 4.46, p = 0.04). Variance components were 21% for rater and 57% for scenario. The G coefficient was 0.80; the D study showed that this increased to 0.91 with 8 raters and to 0.93 with 10 raters.

Conclusions: A novel scoring instrument for quantifying performance during pediatric cardiac arrest showed modest reliability and generalizability. Future studies should examine the effect of a larger number of raters and/or scenarios on generalizability, as well as the utility of the instrument in assessing real clinical performance.
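The D-study projections reported above follow from standard generalizability-theory formulas. As a minimal illustrative sketch (not part of the original study), assuming the reported variance components are proportions of total variance (scenario/team 0.57, rater 0.21, with the unreported residual taken as the remaining 0.22) and that the reported G coefficient is the absolute (Phi) coefficient for a fully-crossed design, the following Python reproduces the abstract's figures:

    # Illustrative D-study sketch; variance proportions are assumptions, not study data.
    var_team = 0.57                             # object of measurement (scenario/team)
    var_rater = 0.21                            # rater facet
    var_residual = 1.0 - var_team - var_rater   # assumed residual (interaction + error)

    def phi(n_raters):
        # Absolute G (Phi) coefficient when scores are averaged over n_raters raters:
        # rater and residual variance shrink as they are divided by n_raters.
        return var_team / (var_team + (var_rater + var_residual) / n_raters)

    for n in (3, 8, 10):
        print(f"{n} raters: phi = {phi(n):.2f}")  # -> 0.80, 0.91, 0.93

Under these assumptions the coefficient rises with additional raters because the error variance is averaged over raters, matching the reported increase from 0.80 (3 raters) to 0.91 and 0.93 with 8 and 10 raters, respectively.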
