Abstract

Automatic speech recognition technology has a high frequency of transcription errors, necessitating careful proofreading and report editing. The purpose of this study was to determine the frequency and spectrum of significant dictation errors in finalized radiology reports generated with speech recognition technology. All 265 radiology reports that were reviewed in preparation for 12 consecutive weekly multidisciplinary thoracic oncology group conferences were examined for significant dictation errors; reports were compared with the corresponding imaging studies. In addition, departmental radiologists were surveyed regarding their estimates of overall and individual report error rates. Two hundred six of 265 (78%) reports contained no significant errors, and 59 (22%) contained errors. Report error rates by individual radiologists ranged from 0% to 100%. There were no significant differences in error rates between native and nonnative English speakers (P > .8) or between reports dictated by faculty members alone and those dictated by trainees and signed by faculty members (P > .3). The most frequent types of errors were wrong-word substitution, nonsense phrases, and missing words. Fifty-five of 88 radiologists (63%) believed that overall error rates did not exceed 10%, and 67 of 88 radiologists (76%) believed that their own individual error rates did not exceed 10%. More than 20% of our reports contained potentially confusing errors, and most radiologists believed that report error rates were much lower than they actually were. Knowledge of the frequency and spectrum of errors should raise awareness of this issue and facilitate methods for report improvement.
