Abstract

We appreciate the opportunity to respond to the questions raised by Bowdle et al., Ibinson et al., and Chan et al. about our manuscript regarding perioperative medication errors (MEs) and adverse drug events (ADEs).1 Our goal was to assess the rates of perioperative MEs and ADEs as percentages of medication administrations, to evaluate their root causes, and to suggest targeted solutions that may have potential to prevent them. We used an observational methodology, combined with a retrospective chart review and subspecialist consultation by an independent adjudication committee, to provide additional clinical context for confirming and classifying the MEs and ADEs. We found that 5.3% of medication administrations resulted in an ME and/or an ADE, and we classified each of these events as an ME with an ADE (preventable ADE), an ME with a potential ADE (near miss), an ME with little potential for harm, or an ADE without an ME (nonpreventable ADE).

Direct observation appears to be the most accurate method of detecting MEs.2,3 Thus, it is not surprising that these ME and ADE rates are higher than those described in previous studies using self-report as the primary detection strategy in anesthesia.4–8 Notably, the rates we report are consistent with, and in many cases at the low end of or lower than, those reported using direct observation and similar ME definitions in the perioperative setting (ME rates, 9 to 11%)9 and those reported using our validated and widely used definition of ME with either chart review or direct observational methods in other settings, such as the medical emergency/code setting (ME rates, 4.4 to 50%),10,11 the critical care setting (ME rates, 9 to 20%),12,13 the inpatient setting (ME rates, 5 to 19%),14–17 the outpatient setting (ME rates, 7 to 12%),18–20 and the simulation setting (ME rates, 0.5 to 26.5%).16,21

While many perioperative ADEs, such as significant hypotension, are corrected during the perioperative period, there is evidence that they can have lasting effects postoperatively.22 A key step to improving the already outstanding safety record of anesthesia is to prevent ADEs and potential ADEs (near misses) from occurring in the first place and, when they do occur, to ensure that we recognize and correct them.

Many of the authors’ comments center on questions about the methodology we used to identify MEs and ADEs. As described in our manuscript, we iteratively revised an ME detection framework that has been validated in the critical care setting23 to make it more relevant to the perioperative setting, using a combination of literature review and expert and subspecialist consultation. We deliberately widened the acceptable range of practice so as not to overcall errors. For example, we widened published, validated, and accepted dosing ranges for medications by 50 to 100% before flagging an event as a possible medication dosing error. Our observers were trained and practicing anesthesiologists and one nurse anesthetist, all of whom received extensive additional training on observational methodology and MEs, including a detailed ME detection handbook, multiple didactic sessions, and case studies of MEs. Each observer conducted observations alongside an experienced observer for at least 10 operations to ensure that consistent information was being captured. The observers’ task was to flag possible MEs and/or ADEs based on our error detection framework.
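For illustration only, the sketch below shows one way such a widened-range check could be applied before an event is passed on for review; the drug name, dosing range, and 75% widening margin are hypothetical placeholders, not values taken from our study instruments.

```python
# Hypothetical sketch: flag a possible dosing error only if the administered dose
# falls outside a published range that has been widened by a margin (the reply
# describes widening accepted ranges by 50 to 100% before flagging).

# Illustrative, made-up per-kg range; not a value from the study.
PUBLISHED_RANGE_MG_PER_KG = {
    "propofol_induction": (1.5, 2.5),
}

def widened_range(low, high, margin=0.75):
    """Broaden a published dosing range by `margin` (e.g., 0.75 = 75% of the span)."""
    span = high - low
    return (max(0.0, low - margin * span), high + margin * span)

def flag_possible_dosing_error(drug, dose_mg_per_kg, margin=0.75):
    """Return True if the dose falls outside the widened range and should be sent
    to the adjudication committee for review; the flag itself is not an ME."""
    low, high = PUBLISHED_RANGE_MG_PER_KG[drug]
    wide_low, wide_high = widened_range(low, high, margin)
    return not (wide_low <= dose_mg_per_kg <= wide_high)

# Example: a 2.2 mg/kg induction dose stays within the widened range (no flag),
# while a 6.0 mg/kg dose falls outside it and would be flagged for review.
print(flag_possible_dosing_error("propofol_induction", 2.2))  # False
print(flag_possible_dosing_error("propofol_induction", 6.0))  # True
```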
These possible MEs and/or ADEs were each reviewed by at least two independent members of our adjudication committee, which consisted of clinical and ME experts. The adjudication committee’s task was to exclude events that were not actual MEs or ADEs and to categorize the events by type, preventability, potential for harm, and severity of harm. To gain the necessary clinical context, the committee members reviewed the observer notes, clarified events directly with the observers, reviewed the patient chart, and consulted with experts where necessary. Only if a possible error or ADE passed this stage was it included in our study. Interrater reliability between our adjudication committee members was excellent (κ = 0.97 for event classification, κ = 0.98 for preventability, and κ = 0.85 for severity).

Bowdle et al. also raised questions about our definition of ME. While there are multiple interpretations of what constitutes an ME in anesthesia, in our manuscript we use a definition of ME that is often used in medication safety research: “failure to complete a required action in the medication administration process, or the use of an incorrect plan or action to achieve a patient care aim.”1,23 The administration of a medication is only a small part of the overall medication administration process, which includes requesting, dispensing, preparing, administering, documenting, and (where applicable) monitoring during medication administration, as described in our manuscript.1 Examples of when monitoring would be required during medication administration in our study include blood pressure monitoring before an induction dose of propofol and glucose monitoring after insulin is given. Thus, our study was not one of medication administration errors only, but of all MEs that occur along the medication administration process. The National Coordinating Council on ME Reporting and Prevention similarly defines MEs as occurring anywhere in the medication process,24 and error studies in all other settings, including critical care,12,13,23 cardiopulmonary arrest and code situations,10,11 inpatient wards,14,16,25,26 and outpatient clinics,18–20 have included MEs along the entire medication process, including errors of omission.

The authors specifically questioned whether a mean arterial pressure (MAP) less than 55 mmHg that goes untreated for a prolonged interval is actually an ME or simply a management decision. While our profession does not have clear definitions for intraoperative hypotension, some evidence shows that a MAP less than 55 mmHg even for short durations (1 to 5 min) intraoperatively is associated with acute kidney injury (adjusted odds ratio, 1.18) and myocardial injury (adjusted odds ratio, 1.3) after noncardiac surgery, and this risk escalates rapidly with longer durations of MAP less than 55 mmHg.22 While further research should be done to establish treatment guidelines for hypotension, for this study we lengthened the time period to more than 6 min of untreated MAP less than 55 mmHg before an event was flagged by our observers as a possible ME (delay in treatment) for further review by the independent adjudication committee. Through review of observer findings, patient chart review, and expert subspecialist consultation as described above, the adjudication committee ruled out cases where persistent MAP less than 55 mmHg was part of intended, appropriate management.
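A minimal sketch of that screening rule, as described above: flag a case for adjudication only when MAP remains below 55 mmHg for more than 6 consecutive minutes with no treatment given during that interval. The data layout and function names are illustrative assumptions, not our study instruments.

```python
# Hypothetical sketch of the hypotension screening rule described above: flag for
# adjudication only when MAP stays below 55 mmHg for more than 6 consecutive
# minutes with no treatment during that run. Field names are illustrative.

MAP_THRESHOLD_MMHG = 55
MAX_UNTREATED_MIN = 6

def flag_untreated_hypotension(readings, treatment_times):
    """readings: list of (minute, map_mmHg) tuples sorted by time.
    treatment_times: minutes at which any corrective treatment was given.
    Returns True if a possible delay-in-treatment ME should be sent for review."""
    run_start = None
    for minute, map_mmHg in readings:
        if map_mmHg < MAP_THRESHOLD_MMHG:
            if run_start is None:
                run_start = minute  # start of a hypotensive run
            treated = any(run_start <= t <= minute for t in treatment_times)
            if not treated and minute - run_start > MAX_UNTREATED_MIN:
                return True
        else:
            run_start = None  # pressure recovered; reset the run
    return False

# Example: MAP below 55 mmHg from minute 10 through minute 18 is flagged when no
# treatment is given, but not when a treatment is recorded at minute 13.
readings = [(0, 72), (5, 60), (10, 52), (12, 50), (15, 51), (18, 53), (20, 61)]
print(flag_untreated_hypotension(readings, treatment_times=[]))    # True
print(flag_untreated_hypotension(readings, treatment_times=[13]))  # False
```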
The authors also asked why leaving unattended syringes of narcotic was considered an ME; there were only two incidents of this in the study. While it can be debated whether or not to count such issues, per our study definition they were included as MEs related to medication preparation and classified in our lowest category of severity (“significant”) because they can lead to a variety of unintended consequences, including drug diversion, which is associated with increased patient risk for blood-borne viruses and undertreated pain.27–30

The authors also questioned how we determined whether a specific event represented an ADE. We used a standard definition for ADEs that is widely used in the medication safety literature.12–14,18,23,25,31 For individual events, the reliability of assessment for the presence of an ADE was excellent (κ = 0.97). The authors also specifically asked about thresholds for treating pain. While there is not a consistent guideline for this in anesthesia, we obtained consensus from acute pain medicine subspecialists regarding tolerable pain thresholds and also looked to the pain literature, which consistently shows that 3 to 4/10 pain is a significant treatment threshold.32–34 We used a threshold of sustained pain greater than 4/10 (or greater than or equal to 5/10) to be flagged by the observers for later review by the adjudication committee (along with observer notes, chart review, and subspecialist consultation as described above) to determine whether or not the incidents were ADEs and whether they were associated with a failure-to-treat ME or were consistent with standard practice and patient management goals (ADE without ME).

The authors compared our results to those of published ME studies that used facilitated incident reporting to identify MEs.4,8 We do not believe that these are valid comparisons, for several reasons. First, evidence shows that incident reporting vastly underrepresents true error rates.2,3 Flynn et al.3 compared ME detection rates on hospital wards using three different methods: incident reporting, retrospective chart review, and direct observation. Their ME definition was the same for all three methods. Of 2,557 medication doses administered, they found 456 MEs (17.8% ME rate) by direct observation, 34 (1.3% ME rate) by chart review, and only one (0.04% ME rate) by self-reporting via an incident reporting system.3 Second, the studies referenced by Bowdle et al. looked at a subcategory of MEs that occurred during medication administration, while our study looked at MEs that occurred during the entire medication use process. The most validated, established, and widely used definition of ME across specialties involves the entire process, as described above. Third, most of the literature that the authors reference reports MEs per anesthetic (not per medication administration), which is a different denominator. The most validated method for measuring error rates is the number of errors per medication administered.3,9,19,20,31,35 Measuring MEs per anesthetic is another approach, but it can be difficult to interpret because different anesthetics involve different numbers of medications administered, as the sketch below illustrates.
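To make the denominator point concrete, the toy calculation below uses invented numbers (not data from our study or from the cited studies) to show how the same set of errors yields very different-looking rates when expressed per medication administered versus per anesthetic.

```python
# Toy illustration with invented numbers: the same errors produce different-looking
# rates depending on whether the denominator is administrations or anesthetics.

anesthetics = 100
administrations = 1_000            # e.g., an average of 10 medications per anesthetic
errors = 40                        # errors among the administrations
anesthetics_with_any_error = 30    # cases containing at least one error

rate_per_administration = errors / administrations               # 0.04 -> 4.0%
rate_per_anesthetic = anesthetics_with_any_error / anesthetics   # 0.30 -> 30.0%

print(f"Errors per medication administered: {rate_per_administration:.1%}")
print(f"Anesthetics containing at least one error: {rate_per_anesthetic:.1%}")
# Note that the per-anesthetic figure cannot distinguish a case containing one
# error from a case containing several.
```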
Also, reporting MEs per anesthetic does not provide information on the medication administrations that are without error when they occur during an operation that contained an error.

Our results can more accurately be compared with observational studies such as that of Merry et al.,9 who conducted a study in five operating rooms in a tertiary academic center in New Zealand and found a perioperative ME rate of 11.6% in a study group that used conventional nonelectronic methods for anesthetic record keeping and 9.1% in a study group that used a multimodal record-keeping system designed in-house. Merry et al. used a definition of ME that included errors related to administration, documentation, and omission or “failure to give an intended medication,” and they reported MEs per medication administered. When Bowdle et al. referred to Merry’s study as having a lower ME rate than ours, it was because they subtracted from Merry’s results the MEs that were not direct medication administration errors. However, like us, Merry et al. counted errors along the medication administration process, such as documentation errors, as their primary outcome, and their reported ME rate was higher than ours.

In their letter, Bowdle et al. also subtract MEs from our results to arrive at a new ME rate that includes only direct administration errors and uses the number of anesthetics (or patients) as the denominator instead of the number of medications administered. As described above, we disagree with this definition of ME and with reporting the rate per operation. Moreover, their calculation assumes no more than one ME per patient in our study, which is not accurate. In fact, of 277 observed operations on 275 patients, 154 (55.6%) did not contain an ME or an ADE, 82 (29.6%) contained 1 ME and/or ADE, 23 (8.3%) contained 2, 13 (4.7%) contained 3, and 5 (1.8%) contained 4 or more.

The authors also commented on the use of a bar code–assisted syringe labeling system and an electronic anesthesia information management system at our institution, both of which were available during our study observation period and are described in detail in our manuscript. While measuring the effect of these systems on ME rates was outside the scope of our study, we agree with the authors’ statement identifying the potential for these systems to lower ME rates and describe this in the manuscript discussion. We also agree with the authors, and our study results support, that simply having these systems in place does not ensure that they are properly used.

Ibinson et al. posed questions about whether patient harm was associated with the errors that we observed. Errors often do not result in ADEs. As outlined in the results of our manuscript, 40 (21%) of our events involved patient harm/ADEs without an ME, and the remaining 153 events were MEs that were associated with an observed (N = 51, 33.3%) or potential (N = 70, 45.8%) ADE or had little potential for harm (N = 32, 20.9%). The most important areas of consideration for solutions to improve patient medication safety are not only preventable, observed ADEs but also potential ADEs. For example, if required perioperative antibiotic doses are missed in a group of patients, we do not focus only on those who develop an infection (ADE) and conclude that it is acceptable to skip antibiotic doses in patients who do not develop an infection (potential ADE). Whether a potential ADE turns into an ADE is often based on luck and uncontrollable factors, and it is important to consider both actual and potential ADEs for improving patient safety.
In fact, it is necessary and standard practice to report potential ADEs along with ADEs in the medication safety literature.12,17,19,25,31 Thus, we classified all observed and potential ADEs on a previously validated and widely used severity scale of harm, ranging from significant to serious to life-threatening.12–14,19,20,25,31 While we used the terms significant, serious, and life-threatening to remain consistent with established severity reporting methods in the medication safety literature, the words are most meaningful when linked to the definitions and examples provided in the manuscript, which describe these three levels as corresponding to “little threat to the patient’s function,” “some threat to the patient’s function that is not life-threatening,” and “life threatening,” respectively. The severity assignments were made by our adjudication committee, whose members have extensive experience using this classification system in ME studies, and their interrater reliability for these assignments was high (κ = 0.85).

While adverse events in medicine are often multifactorial and can be due to a combination of errors that align to produce patient harm, many ME studies report ADEs associated with MEs without classifying them by attributability.12,17,19,25 We used the Naranjo algorithm36 to assess the likelihood that observed ADEs associated with MEs were attributable to those MEs, and we found that only 1 (2.0%) ADE was doubtfully due to the error. Ibinson et al. requested a table showing ADE counts by attributability. Their outlined table does not include errors with potential for harm (near misses), so we have added data on these important errors to the table (table 1).

Similar to Bowdle et al., Chan et al. raise questions about our definition of ME. While the definition of ME has been broadened in different studies to include all medical errors,23 our study definition specifies that these MEs must occur during the medication use process, which, as described above, is consistent with the established literature on MEs. One of the ME examples the authors questioned was the failure to check blood pressure before a patient receives an induction dose of propofol, which is a monitoring error in the propofol administration process, as described above. Monitoring-related MEs have been included as MEs in medication safety research across specialties throughout the literature.13,14,18,19,23

The authors also questioned why our ME rates are higher than those in the existing self-report studies in the perioperative literature, which we have addressed in detail above. They specifically asked whether we considered clinical context. Our adjudication committee reviewed all observer data and patient charts and consulted subspecialists in order to provide the clinical context required to exclude, on a case-by-case basis, incidents that were flagged by observers but may not have been errors, did not have potential for patient harm, or were consistent with standard patient management or specific patient care goals.

Finally, the authors questioned whether our results are sufficient to endorse specific ME solutions. We do not endorse any specific solutions because we agree with the authors, and our report explicitly states, that future analysis is necessary to evaluate process- and technology-based solutions that may address the root causes of the MEs and reduce their incidence. As described in our manuscript, we did not directly test any solutions.
We suggest, based on our judgment and on each error’s type and root cause, the numbers of MEs in our sample that could potentially be eliminated by various solutions, in order to identify solutions that may deserve further consideration and testing. After such solutions are designed, similar studies should be repeated to determine whether or not they reduce the incidence of MEs.

It is important, and not uncommon, to raise questions when presented with new research on MEs,37–43 especially when the research receives attention in the mainstream media. For example, while its scope was much larger than ours, many questions were raised about error definitions and classifications after the Institute of Medicine’s report To Err Is Human was published.37–40,44 We welcome the opportunity to further discuss our findings and the critical importance of using ME, ADE, and severity definitions, as well as research methodologies, that are validated, objective, and consistent with existing cross-specialty ME research. This will allow existing and future results and solutions to be compared, leveraged, and shared, and their impact on reducing ME rates to be accurately measured. Future research should focus on conducting similar studies with consistent definitions at other academic centers, to see whether these results are representative, and at other sites such as ambulatory surgical centers and community hospitals, and, most importantly, on repeating such studies before and after the introduction of solutions to determine whether or not they reduce the incidence of MEs.

Supported by grants from the Doctors Company Foundation (Napa, California) and the National Institute of General Medical Sciences (Bethesda, Maryland) of the National Institutes of Health (Award Number T32GM007592). The content is solely the responsibility of the authors and does not necessarily represent the official views of the Doctors Company Foundation or the National Institutes of Health. Neither the Doctors Company Foundation nor the National Institutes of Health had any role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

Dr. Bates is a coinventor on Patent No. 6029138, held by Brigham and Women’s Hospital (Boston, Massachusetts), on the use of decision support software for radiology medical management, licensed to the Medicalis Corporation (San Francisco, California). He holds a minority equity position in the privately held company Medicalis. He serves on the board of SEA Medical (Emerald Hills, California), which makes technologies that can identify medications in solution. He receives equity and cash compensation from QPID, Inc. (Boston, Massachusetts), a company focused on intelligence systems for electronic health records. The other author declares no competing interests.
