Abstract

Adverse drug events (ADEs) are a common and costly problem. For example, an estimated 1.5 million preventable ADEs occur in the USA annually, at a cost in excess of US$4 billion [1]. Older adults are often at particular risk owing to accumulated medical comorbidities, complex medication regimens, declining functional status and physiologic changes associated with less-predictable pharmacokinetics and pharmacodynamics [2,3].

One proposed ADE-reduction strategy involves the development of algorithms to identify patients at high risk. A recent example is the GerontoNet score, an algorithm designed to predict ADE risk among hospitalized older adults [4]. As is common in ADE risk algorithms, the number of medications taken by the patient dominates the score: receiving eight or more medications yields 4 points, and receiving five to seven medications yields 1 point. One additional point is then awarded for each of the following: heart failure, renal failure, liver disease and four or more medical conditions. Finally, patients with a history of a prior ADE receive 2 points, yielding a potential score of 0 to 10 (a sketch of this calculation appears at the end of this extract).

There are at least two general applications in which ADE risk algorithms such as the GerontoNet score could prove effective. The most commonly described approach involves targeting high-risk patients for concurrent or retrospective review by another clinician, often a clinical pharmacist, who would make recommendations to the primary prescribers. A second approach is to make ADE risk scores available via real-time decision support systems, which could assist prescribers in making risk–benefit decisions for new medications, choosing optimal dosing schedules and implementing appropriate monitoring plans.

So are there any reasons why we should not be rushing to implement ADE risk algorithms in practice? After all, it seems straightforward: make scores available in real time to prescribers, dispatch clinical pharmacists to patients with high scores, and the problem is solved.

While risk algorithms are promising, there are several reasons why their use may not translate into significant ADE reductions in real-world practice. First, the utility of ADE risk algorithms in guiding clinical intervention is contingent upon the magnitude of risk stratification achieved. For example, ADE frequency was 2% at GerontoNet scores of 0–1 and increased gradually to 21.7% at scores of 8 or more [4]. Unfortunately, this impressive level of stratification has not been consistently replicated. Some prominent investigators have even concluded that ADE risk stratification approaches are unlikely to be productive [5].

There are many potential reasons why ADE risk algorithms may not demonstrate consistent risk stratification. Perhaps the most fundamental is that risk prediction is not …
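To make the GerontoNet scoring rules described above concrete, here is a minimal Python sketch of the calculation. The function name and parameter names are illustrative assumptions, not part of the original publication [4]; only the point values and thresholds come from the text.

```python
def gerontonet_score(num_medications: int, heart_failure: bool,
                     renal_failure: bool, liver_disease: bool,
                     num_conditions: int, prior_ade: bool) -> int:
    """Compute the GerontoNet ADE risk score (range 0-10) as summarized
    in this editorial. All names here are hypothetical conveniences."""
    score = 0

    # The number of medications dominates the score.
    if num_medications >= 8:
        score += 4
    elif 5 <= num_medications <= 7:
        score += 1

    # One point each for heart failure, renal failure, liver disease,
    # and having four or more medical conditions.
    score += int(heart_failure)
    score += int(renal_failure)
    score += int(liver_disease)
    score += int(num_conditions >= 4)

    # Two points for a history of a prior ADE.
    score += 2 * int(prior_ade)

    return score  # maximum: 4 + 1 + 1 + 1 + 1 + 2 = 10


# Example: a patient on nine medications, with heart failure and a
# prior ADE but only three documented conditions, scores 4 + 1 + 2 = 7.
print(gerontonet_score(9, True, False, False, 3, True))  # -> 7
```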
