Dear Editor,

The Athletic Training Education Journal recently published an article entitled “High-Fidelity Meets Athletic Training Education: An Innovative Collaborative Teaching Project.”1 Although we appreciate the authors' experiences with how they are using high-fidelity simulation, we are concerned with the interpretation and accuracy of this information. We feel that these misunderstandings contradict the research that informs us as educators. Despite these misgivings, we appreciate the authors' use of simulation to expose students to patient encounters that are rarely seen during clinical education.

We would like to clarify a misunderstanding about the rationale for, and uses of, a simulation versus a standardized patient encounter. Walker et al2 described a simulation as a scenario or clinical situation in which a student evaluates a mock patient/athlete who portrays a fake injury or condition. Simulations can be standardized for a group of learners or created on an as-needed basis for an individual learner. A simulation can range in its level of believability (low to high fidelity) and can include a standardized patient and/or technology such as a partial task trainer (blood pressure), a replica model (rectal temperature), or a high-fidelity simulator such as iStan (CAE Healthcare, Sarasota, FL). In a standardized patient encounter, an individual is trained to consistently portray a patient with a particular injury or illness.2 We would like to clarify that the training for a standardized patient is intentional and deliberate. This training does not merely occur “often,” as the authors report; it is mandatory and critical to the authenticity of the standardized patient experience for the learner.3 In medical schools, standardized patient training is rarely, if ever, performed by a preceptor (physician); it is usually provided by a standardized patient trainer or other staff member. In athletic training, a faculty member would typically train the standardized patient.

The authors state the following: “High-fidelity simulation can provide many of the same benefits as standardized patient encounters, while eliminating the time and monetary costs associated with training mock patients.” This overly general statement is not supported by the literature and does not take into account the many different high-fidelity simulators and their costs. Moreover, standardized patient encounters use standardized patients, not “mock patients.” We would like to address 3 different parts of this statement: benefits, time, and cost.

Standardized patient encounters and simulations each have a place within athletic training education and provide various benefits in the teaching and evaluating of learners. For example, a standardized patient could not mimic the vital signs needed for a sudden cardiac event, but a high-fidelity simulator could, thereby providing an authentic advanced cardiac life support experience for students. On the other hand, a high-fidelity simulator cannot cry or show the emotion of a standardized patient, so a standardized patient can better teach students empathy and communication. We feel it is dangerous to generalize and promote one method over the other. Both have a significant place in education.

Both standardized patient encounters and simulations take a considerable amount of time to create and implement. The creation of either a simulation or a standardized patient case entails developing a scenario, determining patient characteristics and social history, defining student learning objectives, and so on. The time it takes to create these cases depends on the complexity of the simulation or standardized patient encounter as well as the learning objectives for the encounter. Is the student assessing a basic ankle sprain or a concussion with loss of consciousness? Is the student evaluating only, or synthesizing information gathered during the evaluation to plan a course of treatment?

Both simulations and standardized patient encounters have associated monetary costs. For example, if an educator wanted to ensure that all students provided advanced cardiac life support to a patient before graduation, then ideally a high-fidelity simulator, often costing $30,000 to $150,000 (not including a yearly maintenance contract), would be needed. Furthermore, adequate and secure facilities are required to use and store this expensive simulation technology. Costs for a standardized patient range from minimum wage up to $30 per hour; at $30 per hour, even the low-end $30,000 purchase price equals 1,000 hours of standardized patient time. It would therefore take many standardized patient encounters to equal the cost of the high-fidelity simulator. However, the standardized patient alone could not provide an authentic advanced cardiac life support experience for the student.

The authors state that “standardized patient simulation” (we are unsure whether a simulation or a standardized patient encounter is being referred to in this statement) is “often an ineffective and unrealistic method of evaluating clinical skills.” This statement is not supported in the literature. We are unaware of any evidence that would support such a general statement regarding the use of either standardized patients or simulations. Standardized patients are used on a large-scale basis to train, evaluate, and license healthcare professionals4 as well as during qualifying examinations for educational advancement.5 In addition to standardized patients, simulations are used to educate and evaluate physicians, nurses, and a variety of other healthcare professionals.6

The authors continue, “Trainees often have a hard time connecting these simulations [standardized patient simulations] to real-life clinical experiences.” To our knowledge, this general statement specific to simulations is not supported by evidence. McGaghie et al7 recently published a qualitative synthesis of 23 articles spanning 7 years of translational research in simulation-based mastery learning. They found that a variety of clinical skills, such as cardiac auscultation and advanced cardiovascular life support, were mastered through simulation and were translated into improved patient care practices and patient outcomes.7 It would go beyond the purpose of this letter to elaborate further, but there is credible research demonstrating that knowledge transfer does occur.

We would also like to correct some inaccuracies. The authors stated, “Of the ATPs [athletic training programs] surveyed by Armstrong et al, 78.4% see this [infrequent and unpredictable occurrence of an injury] as a barrier.” However, it was Walker et al2 who reported that 78.4% of respondents considered an inadequate volume of injuries or conditions, not the “infrequent and unpredictable occurrence of an injury,” to be a barrier to real-time evaluation. This was cited incorrectly.

The article also stated, “Additionally, 24.6% of ATPs cite a shortage of support for clinical experiences by instructors as a major barrier.” This is not accurate. Walker et al2 found that 24.6% of the respondents agreed or strongly agreed that a coach or administrator who provided minimal support for clinical education was a barrier to real-time evaluation. The authors also seem to have misconstrued the context of these findings from the original article.

The Walker et al2 and Armstrong et al8 manuscripts were cited as reference 2 throughout the manuscript, but they are in fact references 3 and 1, respectively, in the authors' reference list. Walker et al2 surveyed athletic training program directors from all accredited athletic training programs with regard to the various methods of clinical proficiency evaluation. The participants for that study were program directors, not “National Athletic Trainers' Association public and private institutions,” as reported in the current manuscript. Armstrong et al8 surveyed preceptors (then called approved clinical instructors), not program directors, regarding methods of clinical proficiency evaluation in athletic training education programs, and only in NATA District 4. References 4 and 5 are cited regarding the percentage of clinical integration proficiencies completed via simulation, but those references are informational manuscripts on how to use standardized patients9 and how to develop the case a standardized patient will portray.10 We believe these citations were intended for Walker et al2 and/or Armstrong et al.8

We again want to express appreciation to the authors for their efforts, and we were excited to see this educational technique article on high-fidelity simulation. It provides educators with a blueprint for implementing such educational experiences for their students. We thank the Athletic Training Education Journal for this opportunity to communicate our thoughts and concerns. We are hopeful that this dialogue will foster further interest regarding the use of simulations and standardized patients in athletic training education.
