Abstract

The concept that one should consider all of the available evidence in order to decide whether a particular exposure can cause a particular disease has been increasingly challenged in recent years, both scientifically and politically. The scientific challenge has come from modern ‘causal inference’ methodologies which focus on the randomized controlled trial (RCT) as the scientific gold standard, with other types of evidence being downgraded by algorithm-based methods for evidence synthesis. One major example of this approach is the use of packaged algorithms for assessing studies, such as GRADE and ROBINS. The political challenge has come largely from various vested interests (including industry), which have skillfully employed calls for greater accountability in science; these calls have resonated with the present US administration and support methods that downgrade types of evidence that are inconvenient. This development has produced considerable concern about pressures to exclude most epidemiological evidence from consideration by regulatory and advisory committees, thereby weakening regulatory standards. In fact, modern ‘causal inference’ methods emphasizing emulations of RCTs do not provide the gold standard, but are just one part of the epidemiological toolkit. Similarly, algorithm-based methods are just one part of the toolkit of methods that can be used for evidence synthesis. When used carefully, they may assist the assessment of possible biases in studies of particular exposure-outcome associations. However, when used inappropriately to score studies, and to reject evidence on the basis of such scores, these algorithm-based systems have considerable potential for harm, both to science and to public health.
