Abstract

We were quite surprised to see the results of the brief survey on speech recognition (SR) conducted by Peter Marcovici, MD [1], at the University of California, San Diego (UCSD). We felt certain that our own experience at the University of Pittsburgh Medical Center (UPMC) was completely different from the depressing results Dr Marcovici reported. Thus, we conducted the same survey among the radiologists in our own department. Our results are indeed strikingly different.

Similar to Dr Marcovici, we received 44 responses to our survey (from 13 residents, 6 fellows, and 25 faculty members). Respondents answered each question using a 5-point, Likert-type response scale: 1 = "strongly disagree," 2 = "disagree," 3 = "neutral," 4 = "agree," and 5 = "strongly agree." The wording of each question was identical to the wording provided in Dr Marcovici's letter, except that we prefer the term speech recognition instead of voice recognition [2]. We also removed the 2 questions regarding macros because we do not use macros at our institution. We have listed below the average responses we received, with Dr Marcovici's results in parentheses for comparison.

1. I am satisfied with our SR software: 3.8 (2.2)
2. Our SR software increases my productivity/efficiency at work: 3.9 (1.9)
3. Our SR software decreases my productivity/efficiency at work: 2.3 (4.2)
4. Using our SR software introduces errors in communication: 2.7 (4.2)
5. I have seen errors in prior reports that I attribute to our SR software (not diagnostic errors): 3.4 (4.4)
6. I have been notified by a clinician of an error due to our SR software: 2.5 (3.8)
7. I have seen an error attributable to our SR software that has significant clinical implications: 2.5 (3.8)
8. I have had to addend a report because of an error generated by our SR software: 2.8 (4.1)
9. Using our SR software enhances patient care and safety: 3.4 (2.3)
10. Our SR software increases the potential for malpractice: 2.6 (3.8)
11. Our SR software allows more time for teaching, learning, and attending conferences: 3.3 (2.0)
12. When reviewing reports generated by our SR software (before finalizing them), I notice errors that need to be corrected: 4.0 (4.1)
13. When compared to alternative systems (i.e., human transcription services), our SR software …
    a. is faster: 3.9 (1.8)
    b. is more accurate: 2.8 (1.8)
    c. leads to more confusing reports: 2.4 (3.6)

References

1. Marcovici PA. Re: "Frequency and spectrum of errors in final radiology reports generated with automatic speech recognition technology." J Am Coll Radiol 2009;6:282-283.
2. Branstetter BF IV. Basics of imaging informatics: part 1. Radiology 2007;243:656-667.
