Abstract

In an outstanding paper, Zwart-van Rijkom et al.[1] address a challenging research area of important clinical relevance. They set out to measure the frequency and nature of drug–drug interaction (DDI) alerts in a Dutch university hospital, retrospectively analyzing all prescription records of patients hospitalized in 2006 who were prescribed at least one medication. Because there was no record of which alerts were actually presented to the providers, they reconstructed the alerts by combining the professional guideline for the management of DDIs (the G-standard) with the historical prescriptions. The study quantifies, for the first time, the frequency and nature of (possible) DDIs in Dutch hospitals, and contributes to the body of evidence about DDIs and DDI alerts. The authors also offer an insightful explanation for their finding that the 10 most frequently encountered DDIs for adults in hospital were associated with medications that are used and initiated on a large scale in the community setting: the G-standard itself was primarily developed for use within community pharmacies. The authors conclude that more sophisticated clinical decision support systems can improve the specificity of alerts to combat ‘alert fatigue’. Computerized physician medication order entry with decision support capabilities can indeed have a positive impact on hospitalized patients [2], and we second the authors' conclusion. We would like to draw attention to an important observation concerning the acceptance status of the alerts, made explicitly neither in the paper nor in the earlier work on drug–drug interactions that it cites: one cannot reconstruct all of the actually generated alerts from the historical prescriptions in the setting of this study.
In fact, any accepted alert is missed in the simulation, simply because acceptance resulted either in the cancellation of the prescription or in the prescribing of an alternative medication: the originally attempted prescription is no longer there from which to reconstruct the alert. This means that the objective ‘to measure the frequency and nature of DDI alerts’ is, strictly speaking, not attainable. The study should hence be viewed as an attempt to ‘measure the frequency and nature of ignored DDI alerts’. We cannot know the proportion and nature of the accepted alerts, and hence statements such as ‘10% of all prescriptions generated a DDI alert’ should be taken as a lower limit on the true proportion of alerts generated by the system. Targeting the ignored alerts is, however, easily motivated: one may be interested in the potential harm to patients that was not successfully intercepted by the system. This also means, however, that the reported frequencies and nature of DDIs depend strongly on the effectiveness of the alerts and on the underlying drug–drug interaction database used. If the accepted alerts were considered as well, the frequencies of important DDIs in the study might change; consequently, for hospitals not using the G-standard, developers of alerting systems may have somewhat different DDIs to target. What is more, accepted alerts may play an additional (serendipitous) role in alert design within decision support systems: relatively frequently accepted alerts (e.g. with an acceptance rate of 40%) can empirically signify DDIs perceived as genuinely important by the providers. These interactions may hence merit higher priority (in, say, the remaining cases in which they are ignored) than other DDIs. This could reduce the number of alerts, mitigate alert fatigue, and boost the effectiveness of the targeted alerts.
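The survivorship effect described above can be made concrete with a toy simulation (the probabilities below are hypothetical illustrations, not figures from the study): accepted alerts remove the triggering prescription from the record, so a retrospective reconstruction sees only the ignored alerts and therefore underestimates the true alert rate.

```python
import random

random.seed(42)

# Hypothetical parameters (for illustration only): each attempted
# prescription fires a DDI alert with probability p_alert; a fired
# alert is accepted (prescription cancelled or changed) with
# probability p_accept.  A retrospective reconstruction from the
# surviving prescriptions can only recover the *ignored* alerts.
p_alert, p_accept = 0.10, 0.30
n = 100_000  # attempted prescriptions

true_alerts = reconstructed = surviving = 0
for _ in range(n):
    if random.random() < p_alert:        # attempted prescription fires an alert
        true_alerts += 1
        if random.random() < p_accept:   # accepted: prescription disappears
            continue
        reconstructed += 1               # ignored: alert is reconstructable
        surviving += 1
    else:
        surviving += 1

print(f"true alert rate (per attempted prescription): {true_alerts / n:.3f}")
print(f"reconstructed rate (per surviving record):    {reconstructed / surviving:.3f}")
```

With these assumed numbers the reconstructed rate settles near 0.072 rather than the true 0.10, which is exactly the sense in which a reported figure such as ‘10% of all prescriptions generated a DDI alert’ is a lower limit.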
In summary, both ignored and accepted alerts merit investigation, but knowledge of their acceptance status changes both the interpretation of the DDIs found and the implications for the design of alerts in computerized decision support systems.
