Introduction/Background
Jeffries'1 theoretical framework, developed for healthcare disciplines, suggests that high-fidelity simulation (HFS) design characteristics are relevant to simulation learning. Inconsistencies in implementing HFS scenarios arise when different instructors take different approaches to the facilitator and debriefing roles. Despite a variety of simulation models, it is not known what constitutes best practice, and there is little research on how to standardize the organizational and technical implementation of HFS. Inconsistencies in simulation practice may negatively affect successful completion of learning objectives and, ultimately, student learning.

Methods
In the fall of 2012, a BSN program implemented a concierge model of simulation. A concierge is a faculty member who has been trained in simulation methodology and whose role is twofold: 1) to facilitate the scenario and 2) to conduct debriefings. Before the concierge model was initiated, HFS experiences were less organized: nursing students were accompanied to the simulation center by their clinical instructor, who may or may not have been trained to facilitate or debrief the HFS experience. The concierge model, with four simulation-trained concierges, provided much greater consistency in the HFS learning experience for nursing faculty, BSN students, and simulation technicians. Kirkpatrick and Kirkpatrick's Model of Training Evaluation2 was selected to guide program evaluation of the new model. This abstract describes Kirkpatrick Level 2 evaluation, which focuses on changes in students' "knowledge, skills, and attitudes." In the spring of 2013, after IRB approval and student consent, a quasi-experimental study was conducted to evaluate the concierge program. Our evaluation question was: How does the simulation experience, guided by the concierge, change students' performance of clinical skills in the HFS lab? Performance measures were developed from the simulation learning objectives.
The performance measure was reviewed by three nursing faculty who were medical-surgical content experts with a combined 18 years' experience in HFS. Four groups of students participated in a 10-minute high-fidelity simulation scenario. Some students participated in the scenario while others observed a real-time videotape of the scenario in a different room. The scenario was followed by 30-45 minutes of videotape review and debriefing by the concierge for all students. After a 10-minute break, the scenario was repeated with the students who had previously been observers. A second debriefing was then conducted with all students.

Results
Evaluation faculty reviewed the videotapes and rated students on the performance measures for both sessions. No significant difference was found between the first and second sessions' total summative scores: students in the second scenario (after debriefing) did not perform significantly better than students in the first session. However, significant differences were found in selected subscores. Students in the second session were quicker to identify the patient, and they administered an expectorant more often. These findings provide information for improving the performance measure; measures linked to the time taken to complete skills may better highlight differences between sessions.

Conclusion
While the results of the Level 2 evaluation are inconclusive, the act of observing student performance informed the research team about which skills were not transferring from didactic instruction to simulation. For example, evaluators observed that only some students used a systematic process to initiate care, yet such processes can be effective in a simulation lab. Another valuable lesson from this study is that research truly is a team effort among nursing faculty, concierge faculty, and simulation technicians. No detail is too small to address: for a successful research process, everyone involved in implementing HFS must be trained, right down to scripting the debrief.