We would like to thank Drs Norman and Cook for their comments on our recent article in CHEST (February 2011).1 We agree with several of their points, including that training in one clinical skill (recognition of aortic stenosis) does not generalize to improved performance in recognizing another (mitral regurgitation), as we stated in the "Limitations" section of our study. This finding is another demonstration of the phenomenon of content specificity rather than a reflection on our study.2 We also concur with their cautionary note that high-fidelity simulators are not the only way to create a virtual learning experience that improves clinical skills. It was never our intention to compare interventions in this study; single studies are typically designed to answer one research question, and no single study can answer all questions. We do, however, disagree with two of the views expressed by Drs Norman and Cook. The first is that "students who heard the simulated mitral regurgitation would diagnose mitral regurgitation the next time they heard any murmur." If this were the case, we should expect a similar finding for students trained in aortic stenosis. Yet we did not find this; in fact, most students who heard simulated aortic stenosis correctly diagnosed mitral regurgitation on a real patient. We also disagree with their inference that there exist well-designed studies demonstrating that simulator training can improve performance on real patients. To support their opinion, they cite two studies, which, by implication, they consider well designed.
These studies compared two interventions (ie, phonocardiosimulator vs real patients in the study by Aberg et al,3 and compact disc vs human patient simulator in the study by de Giovanni et al4). Both studies used a parallel-group design that included neither a control group nor a preintervention evaluation. In addition, the comparison interventions in both studies were embedded in a curriculum that also included didactic teaching. Both groups found no difference between interventions in postintervention performance on real patients. Because of the study design, however, it is not possible to establish whether both interventions were equally effective (ie, transfer occurred equally in both groups) or equally ineffective (transfer occurred in neither group), nor whether any learning gains were due to the didactic teaching or to the target interventions. Although our study has its own limitations, our design did include a control group that received the same amount of simulator training (but no exposure to a cardiac murmur), and our intervention comprised the simulator learning experience only. Thus, we were able to demonstrate that transfer of learning did occur and that it was associated with exposure to the target murmur (mitral regurgitation) on a simulator. Currently, our understanding of how best to use simulation in medical education is inchoate, and many research questions remain to be answered. Drs Norman and Cook have hinted at some of these, including the following: Is one type of simulation better than another?
And what is the cost-effectiveness of different types of simulated learning experiences? We would add to this: What types of outcomes can be improved through simulated learning experiences? And are combinations of simulated learning experiences better than single experiences?5 To tackle these and other important questions, we will need a variety of study designs, each with its own strengths and limitations.

References
1. Fraser K, Wright B, Girard L, et al. Simulation training improves diagnostic performance on a real patient with similar clinical findings. Chest. 2011;139:376-381.
2. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
3. Aberg H, Johansson R, Michaëlsson M. Phonocardiosimulator as an aid in teaching auscultation of the heart. Br J Med Educ. 1974;8:262-266.
4. de Giovanni D, Roberts T, Norman G. Relative effectiveness of high- versus low-fidelity simulation in learning heart sounds. Med Educ. 2009;43:661-668.
5. Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad Med. 2010;85:806-812.