Abstract

Aim: This article reports a study that developed and tested the validity and reliability of the RAPIDS-Tool for measuring student nurses' simulation performance in assessing, managing and reporting clinical deterioration.

Background: The importance of nurses recognizing and responding to deteriorating patients has led educators to advocate increasing use of simulation to develop this competency. However, there is a lack of tools to objectively evaluate nurses' simulation performance on clinical deterioration.

Method: The study was conducted in three phases. Phase 1 developed items for the RAPIDS-Tool on the basis of a literature review and consensus from a panel of national experts. Phase 2 established the content validity of the RAPIDS-Tool through a panel of international experts and a pilot test. Phase 3 tested the psychometric properties of the RAPIDS-Tool on 30 video-recorded simulation performances: construct validity, inter-rater reliability, and the correlation between its two scoring systems.

Results: The development and validation process produced a 42-item RAPIDS-Tool. Significant differences (t = 15.48, p < 0.001) in performance scores among participants with different levels of training supported its construct validity. The RAPIDS-Tool demonstrated high inter-rater reliability (ICC = 0.99) among the three raters and a high correlation between the global rating and checklist scores (r = 0.94, p < 0.001).

Conclusion: The RAPIDS-Tool is a valid and reliable instrument for evaluating nurses' simulation performance in clinical deterioration. It may prove useful for future studies investigating the outcomes of simulation training.

