Abstract

As physicians, we aim to make clinical decisions based on evidence. There are many definitions of evidence-based medicine. Some restrictive definitions understand it as a process to systematically find, appraise, and use research findings as the basis for clinical decisions. The meaning of the term evidence-based medicine has since broadened: the concept now also includes the careful balancing of the risks and benefits associated with treatments or diagnostic tests, taking into account each patient's unique circumstances, including baseline risks, comorbid conditions, and personal preferences and values. When appraising the evidence, we must unfortunately accept that evidence does not mean certainty; a careful review of the available data is therefore imperative. The hierarchy of evidence has been established on the basis of the methodological character of studies, with the highest-quality knowledge coming from meta-analyses and randomized clinical trials (RCTs).

RCTs provide the best assessment of the effect of a therapy or an intervention because randomization provides an unbiased allocation of treatment. Randomization guarantees that even if the groups under study are not identical with respect to all relevant (known and unknown) prognostic factors, any such differences will be due to chance. Statistical theory based on random sampling can therefore be used to calculate confidence intervals that express the potential magnitude of such effects, as sketched in the short example below. However, data obtained from RCTs are sometimes incomplete, contradictory, or absent. Furthermore, the likelihood of success or failure of an intervention is not identical across all individuals in a trial, because the therapy under study is usually not the only determinant of outcome, and different patient characteristics can act as effect modifiers. In addition, the temptation to over-interpret secondary analyses can be irresistible and can lead to faulty conclusions. It is well described that clinical trial participants tend to be younger, more motivated, and to have fewer comorbid conditions than patients in the general population, in part as a result of strict eligibility criteria. This, together with the underrepresentation of elderly patients and ethnic minorities, limits the generalizability of the results of RCTs.

Despite the indisputable role that RCTs play in the generation of new knowledge, observational studies have helped us establish key causal relationships. Important data continue to be obtained from large prospective cohort studies, such as the Framingham Heart Study, the National Child Development Study, the Nurses' Health Study, and the Women's Health Initiative, among others. Observational studies are increasingly being used for comparative-effectiveness research. Despite their limitations, they are particularly helpful when RCTs cannot be performed for ethical reasons, when randomization is not feasible, or when existing clinical trials are not relevant to the population of interest. Carefully designed observational studies can provide critical information on real-world applicability, rare conditions, understudied populations, the uptake of new technologies and treatments, long-term complications, and the benefits, costs, and toxicity of a treatment or an intervention in specific subsets of patients. Despite the strengths of both experimental and observational designs, for some the debate between RCTs and observational studies remains open.
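To make the point about random sampling concrete, here is a minimal sketch, using hypothetical trial counts and a function name of our own choosing, of how a 95% confidence interval for a treatment effect in a two-arm RCT can be computed. It uses the normal (Wald) approximation for a risk difference, which is one common choice among several, not the method of any particular study discussed here.

```python
# Minimal sketch: 95% confidence interval for the risk difference between the
# two arms of a randomized trial, using the normal (Wald) approximation.
# All counts below are hypothetical and purely illustrative.
import math

def risk_difference_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Return the risk difference (treatment - control) and its 95% CI."""
    p_t = events_t / n_t  # event risk in the treatment arm
    p_c = events_c / n_c  # event risk in the control arm
    diff = p_t - p_c
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical trial: 40/200 events with treatment vs 60/200 with control
diff, (lo, hi) = risk_difference_ci(40, 200, 60, 200)
print(f"risk difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# -> risk difference = -0.100, 95% CI = (-0.184, -0.016)
```

With these hypothetical counts, the interval excludes zero; it is the width of the interval, not the point estimate alone, that expresses the potential magnitude of the effect.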
Different studies have compared the conclusions of RCTs and observational studies across a number of clinical topics. Concato et al and Benson et al, in their comparisons, observed that results from well-designed cohort studies and case-control studies did not systematically overestimate the magnitude of the association between exposure and outcome when compared with RCTs. More recently, the apparently contradictory results between observational studies and RCTs on the effect of hormone-replacement therapy on coronary heart disease and breast cancer reopened this discussion. However, reanalyses of the data revealed that the discrepancies were caused not by the design of the studies but by the timing of initiation of hormone-replacement therapy relative to the onset of menopause. So why do we sometimes see divergent results between RCTs and observational studies? How can we explain the inconsistencies? Are observational data suffering a credibility crisis? What should we do when only observational data are available? It is well known that nonrandomized comparisons can provide misleading estimates, because selection biases can affect the results and threaten the validity of these studies. Our group explored the effect of selection biases in observational studies of treatment effectiveness in cancer care using the linked SEER-Medicare database. When we evaluated the mortality of patients with prostate cancer treated and not treated with androgen deprivation, and the mortality of patients with colon cancer treated with and without fluorouracil-based chemotherapy, the observational data produced improbable results. Selection biases, related both to the extent and aggressiveness of the tumor and to the underlying health of the patients, probably play a role.
