Abstract

Spontaneous reports (SRs) of suspected adverse drug reactions (ADRs), known as 'Yellow Cards' in the UK for over 50 years, are used in most countries of the world. The total number of countries that have contributed to the World Health Organization (WHO) database of such reports, administered so effectively at the Uppsala Monitoring Centre (UMC), is 122. In 2014 and the first 9 months of 2015, 110 different countries contributed reports, and there are now over 11 million of them. The medical/scientific communities have expressed different views about their value. I recall Tom MacDonald of Dundee saying "The plural of anecdote is not data" in the context of SRs. Others have held the absolute contrary view, suggesting SRs are the strongest evidence when it comes to the assessment of harm. It has been well argued that individual reports can sometimes provide convincing evidence of drug-caused harm; the use of an anecdote as evidence has been discussed by Aronson [1], among others. To deal with large numbers of such anecdotes, they have for over 30 years been collected into databases such as that at the UMC. The last two decades have seen the development of statistical methods to filter the large number of reports that arrive at regulatory authorities or companies and are included in the databases. There is probably agreement in the wider community that the occurrence of a number of reports can constitute a 'signal' of possible harm, though not necessarily sufficient evidence to demonstrate that the adverse event (AE) is a true adverse reaction (i.e. caused by the drug). A good question is whether such reports are ever anything more than a possible indication of an adverse reaction. The accompanying paper by Macia-Martinez et al. [2] takes an interesting approach to examining the utility of the statistical methods used to assess them.
The authors used the proportional reporting ratio (PRR) [3], which is a simple measure of disproportionality—that is, how much more frequently a particular drug was reported in association with a particular event in the database than would be expected if there were no association between the drug and the event. They calculated this measure for a set of drug/AE combinations that had been subject to regulatory action, meaning experts had concluded there was reason to believe that the association was causal. In addition, for a drug/AE combination to be included in the series of 15 cases, there had to be accompanying epidemiological evidence as well as SRs of the association. The authors examined the relation between the measure of disproportionality and a measure of the relative risk (RR) for that drug from the totality of the available epidemiological evidence. Depending on your prior view, the fairly strong relationship they found may come as a surprise or be just as expected. It may be noted that, on average, the PRR overestimated the RR. Starting with cases in which action was taken does not address the issue of whether the primary observation that a PRR (or the value of another measure) is high is evidence of causality on its own. A key phrase used by the authors was "should the signal be confirmed". The magnitude of the PRR may be useful at a stage when further evidence has been gathered to confirm or refute the signal. This study …

Correspondence: Stephen J. W. Evans, stephen.evans@lshtm.ac.uk
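As a concrete illustration of the disproportionality measure discussed above, the PRR can be computed from a 2×2 table of report counts. The sketch below uses the standard PRR definition from the pharmacovigilance literature; the report counts are entirely hypothetical and do not come from the accompanying paper.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.

    a: reports of the drug of interest WITH the event of interest
    b: reports of the drug of interest with OTHER events
    c: reports of all OTHER drugs with the event of interest
    d: reports of all other drugs with other events

    PRR = [a / (a + b)] / [c / (c + d)]
    i.e. the proportion of the drug's reports mentioning the event,
    divided by the same proportion among all other drugs' reports.
    """
    if a + b == 0 or c + d == 0 or c == 0:
        raise ValueError("cannot compute PRR: a denominator would be zero")
    return (a / (a + b)) / (c / (c + d))


# Hypothetical counts: 20 of the drug's 1,000 reports mention the event,
# versus 100 of the 99,000 reports for all other drugs.
print(round(prr(20, 980, 100, 98900), 1))  # prints 19.8
```

A PRR well above 1 (here, the event appears roughly 20 times more often in this drug's reports than in everyone else's) is the kind of disproportionality that may raise a signal, but, as the editorial stresses, it is not by itself evidence of causality.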
