Abstract

Arguing against the Proposition is Sarah L. Gulliford, Ph.D. Dr. Gulliford obtained her Medical Physics Ph.D. in 2002 from The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton, Surrey, England. After spending three years as a Radiotherapy Physicist at Suffolk Oncology Centre, Ipswich Hospital, Suffolk, she returned to The Institute of Cancer Research, where she is currently a Staff Scientist. Dr. Gulliford's main research interest is the development of scientific methods to analyze the response of normal tissues to radiotherapy. Drawing on her expertise in trials analysis, she is a member of the NCRI National Radiotherapy Trials QA group and its associated Database and Information Technology subgroup.

If the past is anything to go by, future radiotherapy practice will need to consider increasingly available innovations and treatment options and greater processing of patient data (far exceeding human cognitive capacity).1 A greater number of patient-specific decisions will need to be made, accommodating patient and tumor heterogeneity,2 our understanding of which is in constant evolution.3 To guide radiation oncologists and patients through this, evidence is needed. In an ideal world, this evidence would come from prospective randomized trials: randomize into matched groups, control every aspect of the treatment, and collect data of perfect quality, with enough power to detect a change in outcome if one is present. Then do this for every decision you ever wanted to make, in every patient group you can think of, while keeping up with innovations in the field. Sounds impossible? That is because it is. Cancer research and innovation have created an explosion in what we know about a patient and an explosion in the treatments we can give a patient. We cannot possibly trial our way through every combination. To make matters worse, in our technology-driven radiation oncology community the evidence is less rigorous than in many other disciplines, because medical devices progress through incremental innovations and have a shorter development cycle than drugs. Furthermore, ethical considerations sometimes prevent the evaluation of medical devices in a randomized clinical trial.4 The consequence of this evidence gap can be seen in current radiotherapy practice: when asked the question, “What will be the outcome of this treatment in this patient?”, the answers radiation oncologists give are very close to a toss-up.5

Where do we go from here? How do we close the evidence gap? We are sitting on a pile of retrospective patient data in our treatment planning systems, oncology information systems, and electronic hospital records. Whenever we introduce something new, we generally collect data on it as well. What if we could supplement our clinical trial evidence base by learning from these retrospective clinical data and using that knowledge to change our local practice? This concept is now commonly called Rapid Learning.6 The only real obstacle standing in the way of Rapid Learning (and of retrospective analysis in general) is the amount and quality of the data to learn from. By linking clinical data sources across institutes, these problems can be minimized. By linking data we can (a) learn from more patients, (b) identify data quality issues, and (c) learn from our differences in practice, such as the use of different technologies. Together, we have the data to inform many of the decisions radiation oncologists face every day. Further, tools are on the way to learn unbiased, reliable evidence from these data in a privacy-preserving manner.7 Will future practice be based on evidence from retrospective linked clinical data sources rather than prospective randomized controlled trials? Yes, because we have no other choice. We need to learn from all the patients we treat. Only then can we hope to help patients and oncologists make evidence-based choices for personalized radiotherapy treatment.
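As a minimal, purely illustrative sketch of what such privacy-preserving learning across institutes might look like (this is not the system behind reference 7; the data are synthetic, and names such as `local_step` and the plain federated-averaging setup are assumptions made for this example):

```python
# Toy illustration of privacy-preserving distributed learning: several
# institutes fit a shared logistic model by exchanging only model
# coefficients, never patient records. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)

def make_institute_data(n):
    """Synthetic stand-in for one institute's retrospective records:
    two 'dosimetric' features and a binary toxicity outcome."""
    X = rng.normal(size=(n, 2))
    logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)
    return X, y

institutes = [make_institute_data(n) for n in (200, 350, 120)]

def local_step(w, X, y, lr=0.1):
    """One gradient step on local data; only the updated weights
    leave the institute, never X or y."""
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / len(y)
    return w - lr * grad

w = np.zeros(2)                  # shared model, initialized centrally
for _ in range(200):             # communication rounds
    local_ws, sizes = [], []
    for X, y in institutes:
        local_ws.append(local_step(w.copy(), X, y))
        sizes.append(len(y))
    # weighted average of the parameters (federated averaging)
    w = np.average(local_ws, axis=0, weights=sizes)

print("learned coefficients:", w)  # should move toward [1.5, -1.0]
```

The point of the design is that only model coefficients cross institutional boundaries; the patient-level records never leave the institute that holds them.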
Prospective randomized controlled clinical trials are the backbone of medical research. They provide a framework in which new treatments can be tested for safety and efficacy. Utilizing this framework reduces uncertainties by limiting variation in patient characteristics and by standardizing procedures and follow-up data for comparative studies of “experimental” vs “standard” treatment.8 Evidence from a randomized controlled clinical trial is regarded as Level 1 evidence,9 the best available. Changes in radiotherapy practice should be based on the best available evidence and as such should have their foundation in clinical trials.

There are many benefits to clinical trials. Clearly, conducting a clinical trial ensures rigorous procedures and quality assurance, yielding high-quality, consistent data from both single- and multi-institutional studies. For each trial, standardized case report forms ensure that the same validated questions and corresponding grading are collected for all patients. This facilitates true comparisons between patients regardless of where they are treated. There are also positive spin-offs from conducting clinical trials: in multicenter trials, the audits and support given to participating centers often aid the safe introduction of new techniques into the clinic. Properly conducted randomized trials are costly, both financially and in terms of resources, but the quality of the information produced will drive changes in radiotherapy practice with confidence.

A classic example of the benefit of using clinical trial data is the parotid-sparing trial PARSPORT,10 one of the few fully randomized comparisons of conventional vs intensity-modulated radiotherapy (IMRT). The subsequent analysis of the trial demonstrated clearly that IMRT was beneficial in terms of reducing xerostomia. In addition, it was observed that acute fatigue was more prevalent in the IMRT cohort. Analysis of both dosimetric and clinical data has suggested that the explanation may lie in the dose to central nervous system (CNS) structures, including the brainstem and cerebellum.11 As with any scientific finding, this result requires independent validation, but it is illustrative of what can be observed in the clinical trials context. I postulate that this particular needle would have been lost in the haystack of linked whole-institution databases. The PARSPORT trial is also credited with facilitating the implementation of IMRT for head and neck cancer in a number of centers in the UK. Indeed, the Institute of Physics and Engineering in Medicine report “Guidance for the Clinical Implementation of Intensity Modulated Radiation Therapy”12 recommends that IMRT should, where possible, be implemented through participation in a phase III clinical trial. As radiotherapy practice continues to evolve, the importance of clinical trials must not be underestimated.
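To give a concrete sense of the accrual such a randomized comparison demands, here is a hedged back-of-the-envelope sketch of a standard two-proportion sample-size calculation; the event rates below are illustrative assumptions, not figures from PARSPORT or any cited trial:

```python
# Standard two-proportion sample-size calculation (normal
# approximation). Rates are illustrative assumptions only.
from math import ceil

def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Patients needed per arm to detect a change from rate p1 to p2
    with ~80% power at a two-sided 5% significance level
    (the z values for those choices are hard-coded)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. halving a 40% toxicity rate still needs sizeable accrual:
print(n_per_arm(0.40, 0.20))   # ≈ 79 patients per arm
```

Even a large effect (halving a toxicity rate) needs on the order of 160 randomized patients, and the required numbers grow rapidly as the expected difference shrinks, which is part of why trials are costly and why they are reserved for well-chosen questions.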
Evidence-based medicine is “… integrating individual clinical expertise with the best available external clinical evidence from systematic research”9 and we should all strive for that in radiotherapy. So my esteemed opponent was right to state that “changes in radiotherapy practice should be based on the best available evidence.” I have tried to point out the growing impracticality of obtaining this external evidence via randomized controlled clinical trials, as have others.13 In this rebuttal I would like to focus on the integration of external evidence with clinical expertise, and on why that is a problem in trials. Maintaining clinical expertise in a technology-intensive discipline such as radiotherapy is a concern: staff need to be trained, QA programs are needed, and so on. A case in point is the HeadSTART trial,14 which found that “poor radiotherapy can greatly exceed the anticipated benefit of concurrent chemotherapy.” I do not think these investigators would agree with my opponent's suggestion to use trials to gain clinical expertise, because they went on to recommend that “to achieve quality radiotherapy, participation in trials should be limited to sites that can contribute a significant number of patients.” This illustrates the dilemma that, to find the proverbial needle in the haystack (and my opponent mentions a very interesting one), one needs to apply a level of radiotherapy in trials that is not representative of common clinical expertise. One cannot help but wonder whether finding the needle has become more important than evidence-based medicine. Because our delivery process is technology-based rather than drug-based, I believe that the evidence we generate should be tightly integrated with our clinical expertise. Linked clinical data sources allow early adopters to implement new technologies and create external evidence, and permit others to critically evaluate that evidence and use it to improve their local clinical expertise. For me, that is evidence-based radiotherapy.

My colleague suggests that it is impossible to conduct a prospective randomized trial for every improvement in radiotherapy practice, and he is correct. However, that does not mean that future radiotherapy practice will not be based on evidence from prospective randomized controlled clinical trials. The process of conducting a clinical trial allows issues of feasibility to be addressed; the resulting evidence provides the confidence for centers to “join in” with a clearly stated methodology. If, for example, the intention of a study is to escalate to a higher prescription dose or a more extreme fractionation than has been implemented previously, no amount of data mining will accurately predict the outcome or toxicity profile of the affected patients. The notion that it is “unethical” to perform clinical trials has a very weak foundation. Every new technique should be comprehensively audited before implementation in the clinic, with results compared to previous practice. My opponent cites a reference (4) which states that the use of randomized controlled trials to demonstrate superior efficacy “can be unethical if applied to many medical devices.” The two examples given are a software development and an improvement in a beam model. Neither of these should ever be tested on patients; instead, both should be evaluated using dosimetric verification and comparison. Clinical questions can be answered effectively using clinical trials. I do not believe that the timeline of clinical implementation is too short to conduct trials.
There will always be a period of uptake of new technology15 and, while it may seem obvious that a new technique is better, it is always prudent to check. Randomized controlled trials do not match every possible variable (that would be a case-control study); instead, they choose a cohort of patients who would be likely to benefit from a new technique. The result is rich and varied datasets that can be mined with the best tools available to uncover hidden information, complementing the validation of the best innovations in radiotherapy practice.
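Both sides of this debate ultimately appeal to how well conclusions drawn from data, whether trial or routine, discriminate patient outcomes. As a hedged, self-contained illustration of the “toss-up” discrimination mentioned earlier (reference 5) and of the kind of mining invoked here, the following sketch with synthetic scores and outcomes computes the area under the ROC curve (AUC), where 0.5 is chance and 1.0 is perfect:

```python
# Evaluating how well a model (or a clinician) separates patients with
# and without an outcome, via the area under the ROC curve (AUC).
# Scores and outcomes are synthetic; AUC = 0.5 is a "toss-up".
import numpy as np

def auc(scores, outcomes):
    """Rank-based AUC: the probability that a randomly chosen positive
    case is scored higher than a randomly chosen negative case."""
    pos = scores[outcomes == 1]
    neg = scores[outcomes == 0]
    # compare every positive with every negative; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, size=500)
random_guess = rng.random(500)                    # uninformative score
informative = outcomes + rng.normal(0, 1.0, 500)  # signal plus noise
print(auc(random_guess, outcomes))  # close to 0.5: a toss-up
print(auc(informative, outcomes))   # well above 0.5
```

On synthetic data the uninformative score lands near the 0.5 “toss-up” level, while the informative one sits well above it; the same yardstick applies whether the scores come from a trial-derived model or from mined routine data.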
