The randomized controlled trial (RCT), the gold standard for evaluating the balance of risk and benefit in medical therapies, first emerged as a key clinical research tool in the mid-20th century thanks to the visionary leadership of agencies such as the US National Institutes of Health (NIH), the UK Medical Research Council, and academic research institutions. Since then, clinical trials activity has shifted from the NIH and academia into the purviews of the medical products industry and regulatory authorities. Recent emphasis on evidence-based medicine, patient-centered outcomes research, and learning and accountable health care systems underscores the fact that most clinical trials fail to provide the evidence needed to inform medical decision making. However, the serious implications of this deficit are largely absent from public discourse, and a better balance between commercial interests and public health is critically needed.

In this context, the word “academic” describes a diverse group of individuals exploring scholarly questions, with motivations arising not from the pursuit of profit but from an interest in the public health. The wellsprings of such activity should reside in the NIH and other federal research agencies, coupled with external grantees—most of whom work in the nation’s academic health and science systems. The primary mission of academic health and science systems, as required by their not-for-profit status, places public health interests above return on investment to specific investors.

This in no way disparages the motivations of the many excellent scientists working in industry or in the US Food and Drug Administration (FDA), which regulates its products. On the contrary, the development and appropriate use of medical technologies are vital to the public health. However, it is important to remain mindful that decisions affecting for-profit research will reflect fundamentally different priorities than those of academic researchers. In a sense, the academic enterprise seeks a “return on investment” for the taxpayers who grant (and support) its nonprofit status. Thus, while all researchers are bound by the same standards when conducting human studies, the fundamental questions addressed will differ, and failure on the part of academia to increase its involvement will have predictable consequences.

When fundamental trial methodologies were being developed at the NIH in the 1960s, an NIH-commissioned task force delineated recommendations for organizing and conducting RCTs. One significant early example is the Coronary Drug Project, a joint effort among NIH sponsors, an academic coordinating center, and a steering committee of academic leaders. In the 1970s and 1980s, the NIH often convened academic leaders to identify knowledge gaps and to prioritize and conduct specific trials as funding permitted.

During the 1960s, there was scant statistical literature examining clinical trials methodologies. Researchers learned by doing trials, noting successes and failures, and iterating to advance the field. In a series of discussions in the 1970s, ideas were debated and solutions to immediate problems were proposed. Throughout the 1970s and 1980s, NIH and academic biostatisticians developed many methods now in routine use, including sample size estimation, interim data monitoring, and repeated-measures methods for analysis. At the outset of this era, few large randomized clinical outcomes trials were sponsored or conducted by industry.
Meanwhile, the FDA was developing biostatistical teams to support the review of new drug and device applications and was increasingly demanding RCTs, often with clinical outcomes end points, as the standard for approval and labeling. By the early 1990s, academia was working with industry to lead and conduct clinical trials, using a modification of the NIH organizational structure. While some fields developed this hybrid model of academic leadership in industry-sponsored RCTs, most industry trials evolved in a different direction and were designed by industry scientists in concert with regulators, often with little or no independent academic input. As the clinical trials enterprise grew, statistical principles mandated adequate sample sizes to provide power for detecting typically modest differences in clinical outcomes. Concurrently, the enterprise’s rapid expansion, coupled with egregious instances of fraud and lapses in quality, led to the implementation of auditable data systems. This confluence of factors spurred massive increases in clinical trial costs.
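To give a sense of scale for the sample-size pressure described above, the following is a minimal illustrative sketch, not drawn from the article itself: it applies the standard two-sample proportions approximation to hypothetical event rates, showing why detecting a modest absolute difference in a clinical outcome forces enrollment into the thousands.

```python
# Illustrative sketch only: standard sample-size approximation for comparing
# two proportions. The event rates, alpha, and power below are hypothetical
# choices for illustration, not figures from the article.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per arm for a two-sided test
    of two independent proportions (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05, two-sided
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# A modest absolute risk reduction (10% -> 8% event rate) already requires
# roughly 3,200 participants per arm, i.e., a trial of ~6,400 patients.
print(round(n_per_group(0.10, 0.08)))
```

Because the required sample size grows with the inverse square of the difference to be detected, halving the expected treatment effect roughly quadruples enrollment, which is one concrete mechanism behind the cost escalation the paragraph describes.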