Abstract

BACKGROUND: Resuscitation of critically ill patients requires medical knowledge, clinical skills and non-medical skills, referred to as crisis resource management (CRM) skills. Most human errors that occur in medical crises are attributed not to deficits in medical knowledge but to errors in non-medical skills such as CRM. Few opportunities currently exist to formally evaluate performance during resuscitation, and no gold standard exists for the evaluation of CRM performance. A pilot study using mannequin-based human patient simulation examined CRM performance during simulated emergencies and assessed the validity of a novel rating instrument, the Ottawa Crisis Resource Management Global Rating Scale ("Ottawa GRS"). Debate exists as to whether checklists or global rating scales are superior for the formal evaluation of performance. In this study, a CRM checklist and the Ottawa GRS were both used to evaluate CRM performance. Both instruments were peer-reviewed and divided into five categories of CRM skills. The Ottawa GRS provided a score for each category on a seven-point Likert scale, plus a separate score for overall performance. The CRM checklist measured 15 individual items across the five categories, with a cumulative score of 30 points (two points per item).

METHODS: First- and third-year residents participated in two simulator scenarios, each recreating an emergency commonly observed in acute care settings. Using edited video recordings of each session, three blinded raters evaluated resident performance with both the Ottawa GRS and the CRM checklist. The validity of each rating instrument was assessed on the basis of content validity, response process, internal structure and relationship to other variables; the latter was measured in this study as the response to level of training. T-test analysis of Ottawa GRS and CRM checklist scores was conducted to examine response to level of training. Internal structure was assessed in part through inter-rater reliability, measured for both scenarios with intra-class correlation coefficient (ICC) scores.

RESULTS: A total of 32 first-year and 28 third-year residents were recruited into the study over a 24-month period. Both the Ottawa GRS and the CRM checklist discriminated between levels of training in all categories examined (p < 0.0019 to p < 0.0001). This difference was noted with all raters and with each scenario. No statistically significant difference in resident performance was noted between the first and second scenario. ICC scores for Ottawa GRS overall performance scores were 0.590 and 0.613 for the two scenarios; ICC scores for CRM checklist cumulative scores were 0.633 and 0.545. All raters indicated a strong preference for the Ottawa GRS because of its ease of administration and scoring, and also found it superior in providing the opportunity to rate overall performance.

CONCLUSIONS: Both the Ottawa GRS and the CRM checklist differentiated CRM performance based on level of training, and both demonstrated acceptable inter-rater reliability. The Ottawa GRS was strongly preferred by raters for its ease of use and administration, and appears well suited as a formal evaluation tool for rating physician performance in CRM during simulated emergencies.
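The abstract reports ICC scores for inter-rater reliability but does not state which ICC form was used. As an illustration only, a minimal NumPy sketch of the standard two-way random-effects, absolute-agreement, single-rater form, ICC(2,1) (function name and data layout are hypothetical, not from the study):

```python
import numpy as np

def icc_2_1(ratings):
    """Illustrative ICC(2,1): two-way random effects, absolute agreement,
    single rater. `ratings` is an (n_subjects, k_raters) array, e.g. one
    row per resident performance and one column per blinded rater."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Two-way ANOVA decomposition of the total sum of squares.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols            # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout-Fleiss ICC(2,1) formula.
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Perfect agreement between two raters yields an ICC of 1.0.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
# A constant offset between raters lowers absolute-agreement ICC.
print(icc_2_1([[1, 2], [2, 3], [3, 4]]))
```

Because ICC(2,1) measures absolute agreement, a rater who is systematically one point more generous reduces the coefficient even when rank ordering is identical, which is the appropriate penalty when raw category scores themselves are compared across raters.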
