Abstract

Starting in early recorded history, physicians knew that cancer would usually return after surgical removal. Galen, a Greek physician from the second century, viewed cancer much like his predecessors, considering patients incurable once a diagnosis of cancer was made. Unfortunately, his views set the pattern for cancer management for centuries and partially explain why cancer treatment has developed slowly relative to other therapeutic areas. In addition, the complexity of cancer has required the development of new approaches. One possible explanation for the slow uptake of new methodologies can be inferred from the work of Tversky and Kahneman, who studied the process of making judgments under uncertainty. Making judgments about the probability of an event under uncertainty generally makes use of “representativeness,” or the degree to which an event has essentially similar (ie, representative) characteristics to its parent population and reflects salient features of the processes that generated the observation. Early dose selection for anticancer agents was based on body size, driven by perceptions of safety and treatment individualization regardless of whether patient size influenced drug exposure, and this trend is still evident. Modeling and simulation (M&S) approaches are frequently used during drug development, but application of these approaches to anticancer drug development has been slower than in other therapeutic areas. To better understand and appreciate the application of M&S and appropriate dose regimen selection in oncology, it is helpful to look at the history of chemotherapy and the evolution of anticancer drug development. Historical perspective also helps set goals for future development.

The concept of treating cancer with pharmacologically active agents began in the 1940s with nitrogen mustards and folic acid antagonists. Nitrogen mustards arose from chemical warfare (particularly sulfur mustard gas) during World War I. Pathology and autopsy reports from mustard gas victims in World War I1 and from the explosion of mustard gas bombs on an Allied ship in Bari Harbor in World War II2 showed that profound lymphoid and myeloid suppression occurred after mustard gas exposure. Alexander, a Bari Harbor investigator, theorized that because mustard gas exposure halted the division of certain somatic cells, it might be used to suppress the growth of certain types of cancerous cells.3 Based on this, Goodman and Gilman reasoned that mustard compounds might be used to treat lymphoma. To stabilize the compound, they substituted a nitrogen atom for the sulfur atom, creating the more stable nitrogen mustard.4 Initially working with murine lymphoma models, they demonstrated the efficacy of mustard agents, followed by the injection of mustine (the prototype nitrogen mustard anticancer chemotherapeutic) into patients with non-Hodgkin's lymphoma.5 Temporary partial remissions were produced, but toxicity was profound, especially in patients with acute leukemia.6, 7 Nonetheless, this was the first realization that cancer could be treated by pharmacological agents.5 The first clinical trial results were reported in the New York Times.8

The identification of folic acid in 1941 as an essential vitamin,9 its synthesis in 1946,10 and its ability to reverse megaloblastosis11 suggested that folate might be useful in the treatment of acute leukemia.12 In 1947, Farber13, 14 and colleagues gave folic acid to children with acute leukemia and noted that folate accelerated the leukemia.
This observation led to research into folate antagonists. Seeger et al15 synthesized the 4-amino antimetabolite of folic acid, aminopterin, and provided it to Farber for use in children with acute leukemia.16 The demonstration that another antifolate, methotrexate, was effective in childhood acute lymphoblastic leukemia led to the development of numerous anticancer agents that inhibit normal metabolic pathways.12 Remissions from single-agent treatments were brief but showed that antifolates could suppress proliferation of malignant cells. In 1951, Wright tested methotrexate in solid tumors, showing remission in breast cancer.17 Hertz and Li reported complete remission in women with choriocarcinoma and chorioadenoma in 1956,18 indicating that methotrexate alone could cure choriocarcinoma. In 1960, Wright et al reported remissions in mycosis fungoides treated with methotrexate.19, 20

New chemotherapeutic agents were introduced in 1953. A report by Burchenal et al21 that 6-mercaptopurine (6MP) produced remissions in patients with acute leukemia, especially children, led to its use in sequential and combination chemotherapy with a corticosteroid (usually prednisone) and methotrexate. Burchenal also noted that therapeutic resistance to 6MP developed more rapidly than to antifolates. In 1959, cyclophosphamide, a nitrogen mustard prodrug causing less severe thrombocytopenia, was introduced and shown to have value in lymphoid leukemia.22 In 1962, vincristine was shown to induce complete remissions of childhood lymphoid leukemia resistant to other agents.23 However, as with earlier agents, remissions were temporary, and relapse with resistant leukemia often occurred.

Combination chemotherapy was introduced in 1958 by Frei et al,24 who noted that previous single-agent treatments resulted in transient remissions. Thus, it was reasoned that administering several drugs with different mechanisms in combination might reduce the likelihood of resistance and prolong remission. However, adverse events became problematic. As the number of potential agents increased, there were challenges in determining appropriate combinations and doses, and specialized clinical trial designs and statistical methods had to be developed to permit rapid assessment of chemotherapeutic agents and combination therapies. The incorporation of multiple cytotoxic agents, although rational in targeting multiple metabolic and mitotic processes, was limited by toxicity, which was often severe.

The discovery and application of new drugs providing supportive care allowed for increased safety and higher dose intensity of cytotoxics, enabling reevaluation of existing dose recommendations. These measures included urate-lowering agents such as allopurinol with aggressive hydration to prevent urate crystal nephropathy, antiviral and antibacterial prophylaxis, 5-HT3 and NK1 receptor antagonists to minimize nausea and vomiting, and colony-stimulating factors to enhance myeloid recovery. When effective chemotherapy was first introduced to treat leukemia, rapid lysis of leukemic cells resulted in serious and sometimes fatal metabolic disturbances. The introduction of allopurinol, together with fluid and electrolyte therapy, helped to resolve this.25 More recently, recombinant urate oxidase (rasburicase) was developed for the prevention and treatment of hyperuricemia.26 As remission durations extended, the impact of immunosuppression caused by chemotherapy became more evident.
Varicella became a major problem, particularly with prednisone therapy.27, 28 This resulted in the use of plasma from adults recovering from herpes zoster (which is caused by the same virus as varicella) for the prevention of varicella. After this treatment was found to be effective for prevention, varicella-zoster immune globulin was developed and demonstrated to be effective.29 The introduction of acyclovir in 1980 was an important aid in combating varicella.30, 31 Immunosuppression and mucositis associated with chemotherapy and radiation can result in serious and sometimes fatal mycoses.32 The introduction of amphotericin B in 195833 and of fluconazole in 199034 represented significant advances in controlling mycoses. However, some, such as aspergillosis, remain resistant to treatment and are major causes of mortality, especially in conjunction with prolonged neutropenia.

In 1970, the concept of “leucovorin rescue” as a means of improving response to methotrexate while reducing fatalities and infection was introduced, using a pharmacologic rationale for the timing and dosage of both agents35, 36 to maximize tumor reduction and improve safety. In 1991, granulocyte colony-stimulating factor was approved for concomitant use with chemotherapy,37 showing faster return to pretreatment neutrophil counts and fewer chemotherapy-associated infections, thereby allowing more aggressive chemotherapeutic regimens to be tested with greater patient safety.38 Other agents such as epoetin alfa to treat anemia and agents to treat thrombocytopenia were investigated. Epoetin alfa was initially used to treat cancer- and chemotherapy-induced anemia, but this use has been discouraged since 2008 owing to earlier reports of the pleiotropic effect of epoetin39 and reports of poorer survival in patients receiving epoetin.40

Gastrointestinal side effects including nausea, vomiting, and mucositis were associated with many of the early anticancer agents. In the 1970s, chemotherapy-induced emesis was considered a minor problem, and consequently, few antiemetic trials were published during this time.41 There were several antiemetic agents in use (eg, metoclopramide, antihistamines, dexamethasone, prochlorperazine), but they were not always effective. Several reports indicated that chemotherapy-induced nausea and vomiting not only caused severe discomfort, but could also produce physical lesions (mucositis).42 Mucositis can cause local infections, bleeding, and septicemia.43 Cunningham reported a dramatic reduction in the incidence and severity of nausea and vomiting in a cohort of patients refractory to first-line antiemetic agents when they were given a novel selective 5-HT3 receptor antagonist (ondansetron) immediately before chemotherapy.44

Historically, many clinical trials in oncology have suffered from several deficiencies: many were open-label studies based on concern for the safety of patients receiving unknown treatments, many were underpowered, and many did not include a comparator (standard of care or placebo when a new agent was added to existing therapy). This may be attributed, in part, to the concept of “availability,” introduced by Tversky and Kahneman,45 which states, in part, that decisions can be made based on perceived risk. Availability relies on immediate examples that come to mind when evaluating specific cases, concepts, methods, or decisions. Explaining hypothetical events makes an event seem more likely through the creation of causal connections.
Thus, even though treatments for many adverse events were administered regardless of which drug a patient received, there was strong reluctance to run double-blinded oncology trials. The earliest reports of studies of chemotherapeutic agents by Farber were criticized because of questions raised about diagnosis, potential patient selection bias, relatively small patient numbers, and selective reporting. The lack of acceptance of these results underscored the need for an organized approach to treatment experimentation that would provide unbiased evaluations more acceptable to physicians. Although the benefits of some treatments are self-evident, as was highlighted in a humorous meta-analysis of the benefit of parachutes,46 this is rarely the case. Implementation of robust study designs in all phases of anticancer drug development has been slow.

By the late 1950s, approximately 25 000–30 000 potential anticancer agents were being screened annually, with only 10 to 20 of these agents continuing to clinical trials. Armitage developed 2- and 3-stage screening procedures for assessing nonclinical results that allowed rejection at any testing stage, but acceptance only at the final stage.47 Initially, it was hoped that this program would identify a panacea for cancer. Although cancer comprises many different diseases, there was an underlying belief that they shared sufficient similarity that an agent active in one form of cancer would likely be active in another (“representativeness”). Consequently, early clinical trials of new agents evaluated as few as 5 patients, terminating further development if no response was obtained.

Clinical trials for new anticancer agents include nonrandomized phase 1 (dose-ranging), phase 2 (preliminary efficacy), and phase 3 (proof of efficacy via comparison of treatments) trials. Prior to 2004, randomized phase 2 trials were rare; instead, a “pick-the-winner” design was used to identify the most promising of several experimental regimens to be evaluated in phase 3.48 This was due, in part, to the development scheme proposed by Gehan in 1961,49 as shown in Figure 1. Gehan's plan specified a minimum number of consecutive patients to study in phase 1 before discontinuing development when all patients were nonresponders, at specified levels of rejection error. This was done to dissuade the conduct of very small phase 1 studies that discarded potentially useful agents.
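The arithmetic behind such minimum-patient rules is straightforward: if the true response rate is p, the probability of observing no responses in n consecutive patients is (1 − p)^n, and n is chosen so that this probability falls below the acceptable rejection error. The short Python sketch below illustrates the reasoning; the particular values (a 20% worthwhile response rate and a 5% rejection error) are assumptions chosen for illustration rather than figures taken from the text.

# Minimal sketch of the reasoning behind a Gehan-type minimum-patient rule.
# Assumed illustrative values: worthwhile response rate p = 0.20,
# acceptable rejection error beta = 0.05.
p = 0.20
beta = 0.05

n = 1
while (1 - p) ** n > beta:   # P(0 responses in n patients | true rate p)
    n += 1

print(n)  # 14: under these assumptions, development stops only after 14 consecutive nonresponders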
Although Gehan's original proposal included phase 2 as having 2 parts, with a randomized “follow-up” study to assess clinical benefit, that component of the development process was not regularly implemented. The lack of randomized phase 2 trials may have contributed to the perception of a poor success rate for these agents. Although anticancer agents have a development success rate comparable with other therapeutic areas, nearly half progress to phase 3 testing, with only a 55% probability of a successful approval.50 The relatively high failure rate in phase 3 for anticancer agents has contributed to the high development costs that have been reported.

The first randomized clinical trial at the National Cancer Institute (NCI) began in 1955.51 The study compared intermittent and continuous methotrexate therapy. In the intermittent arm, the complete remission rate was 16%, and no patients survived for 1 year, compared with the continuous arm, in which the complete remission rate was 15% but 20% of patients survived 1 year. The survival advantage was attributed to the longer median duration of remission with continuous therapy. The study enrolled 65 patients and demonstrated that adherence to the proper principles of planning, conduct, and analysis was feasible, although with small patient numbers on each treatment only very large differences in response could be detected.

In 1954, Congress created the Cancer Chemotherapy National Service Center (CCNSC) to stimulate work in cancer chemotherapy. The CCNSC oversaw and organized cooperative groups until 1976. The CCNSC reviewed the principles of controlled clinical trials, including randomization and statistical analysis recommended by Lasagna.52 The cooperative groups agreed on the following: (1) combination of data from all institutions to rapidly accumulate specified patient numbers; (2) standardized criteria of diagnosis, treatment, and measurement of effect; (3) statistical design of the study, with randomized assignment of patients to the groups to be compared; and (4) statistical analysis and collaborative reporting. Based in part on CCNSC suggestions, subsequent studies by the NCI and cooperative groups began to design and implement randomized clinical trials. By the mid-1960s, cooperative groups no longer tested only new anticancer agents; they tested hypotheses concerning therapy and pathophysiological mechanisms. Such evaluations became increasingly common in clinical cancer research, and by the 1990s, half of all phase 3 clinical trials conducted by these groups incorporated ancillary laboratory and correlative studies.53

The development of anticancer agents also necessitated the development of new statistical procedures, particularly for dealing with survival data. In 1963, Freireich et al reported the results of a prospective, randomized, double-blind, placebo-controlled sequential study of 6MP versus placebo in the maintenance of remission in pediatric acute leukemia.54 In the design of the study, patients achieved complete remission (part 1) with steroid treatment and were then paired at each institution by type of remission (complete vs partial) and randomized to 6MP or placebo (part 2), administered on a double-blind basis. The primary end point was duration of complete remission. The sequential study design was one of the restricted sequential procedures proposed by Armitage.55 The sequential design evaluated paired patients on each treatment, and the trial terminated as soon as superiority of one of the treatments could be established. Here, the minimum number of pairs of patients was 9, with a maximum of 66; the study was halted at 18 patient pairs. The study established that 6MP maintenance treatment led to substantially longer remissions, but perhaps more importantly, it was the first to establish the concept that patients should receive treatment during remission, making it a predecessor to the concept of adjuvant therapy.

Although the sequential pair design worked, a more efficient analysis of treatment effect would be obtained by analyzing the actual lengths of remission. Gehan and Cox extended the Wilcoxon test to the fixed-sample-size problem, with each sample subject to arbitrary right censoring.56 Subsequently, Mantel used the chi-square test to compare survival data between 2 or more groups, creating a contingency table of deaths and survivors at each distinct failure time in the groups of patients under study.57 Cox evaluated a partial likelihood approach to the problem of right-censored data,58 comparing survival times among groups with adjustment for covariate values.
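For readers unfamiliar with these methods, the short Python sketch below shows how a Mantel-type (log-rank) comparison and a Cox proportional-hazards regression are carried out today using the open-source lifelines package; the remission data are invented for illustration and are not taken from the trials discussed here.

# Minimal sketch: comparing right-censored remission durations between two arms
# and adjusting for a covariate, using the lifelines package (invented data).
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Remission duration in weeks; event = 1 if relapse observed, 0 if censored.
df = pd.DataFrame({
    "weeks": [6, 10, 13, 22, 32, 7, 9, 11, 16, 20],
    "event": [1, 1, 0, 1, 0, 1, 1, 1, 1, 0],
    "arm":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],          # 1 = experimental, 0 = control
    "wbc":   [4.2, 9.5, 3.1, 2.8, 5.0, 12.0, 8.3, 6.7, 10.1, 7.4],  # covariate
})

# Mantel-type (log-rank) comparison of the two arms.
a, b = df[df["arm"] == 1], df[df["arm"] == 0]
result = logrank_test(a["weeks"], b["weeks"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print(result.p_value)

# Cox proportional-hazards model: treatment effect adjusted for white blood count.
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks", event_col="event")
cph.print_summary()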
Based on observations that some patient factors may be predictive of remission, Feigl and Zelen proposed a model in which an exponential survival distribution is postulated for each patient and the expected value of the survival time is linearly related to a patient factor (for leukemia, white blood count).59 A more general log-linear form of the model was developed by Glasser60; there were many subsequent developments in parametric regression models for censored survival data.61, 62 Thus, early studies not only promoted the concept of adjuvant treatment, but also served to develop new statistical analysis tools for right-censored data.

Phase 1 studies of new anticancer drugs warrant special attention. Recent reviews of phase 1 oncology trials have noted that new dose-escalation study designs are rarely used63, 64; the traditional “3+3” design remains the most common design for oncology phase 1 trials.65 Dose-escalation schemes in phase 1 trials are designed to minimize the number of patients exposed to subtherapeutic doses while allowing doses to be raised safely. Dose-escalation methods for phase 1 cancer clinical trials fall into 2 broad classes: rule-based (or up-and-down) designs and model-based designs. The first up-and-down design was introduced in 1948 by Dixon and Mood66 and was rapidly adapted for use in anticancer drug development to become the 3+3 design. Such studies assign patients to incrementally higher doses based on prespecified rules using actual observations of events suggesting dose-limiting toxicity. Studies using rule-based designs have no underlying assumption of an expected maximum tolerated dose (MTD) and incorporate dose escalation and de-escalation during the trial to identify the recommended phase 2 dose. All rule-based study designs were developed in the era of cytotoxic drugs, when it was assumed that both efficacy and toxicity increase with dose. These relationships are typically represented by dose–toxicity and dose–efficacy curves, in which toxicity and efficacy increase monotonically with increasing dose.

Conversely, model-based designs assign patients to doses according to the estimated probability of achieving a target toxicity, using an underlying model (eg, with Bayesian priors) of the dose–toxicity relationship that is usually derived from nonclinical data or from published data on analogues of the investigational agent. Most model-based designs are modified to include restrictions, such as limits on the magnitude of dose increments, serving as safeguards to avoid exposing patients to undue harm. Model-based designs generally use all available data to model the dose–toxicity curve and provide a confidence interval for the recommended phase 2 dose. Some of the challenges presented by model-based designs include the need for statistical or pharmacometric expertise, the need to update models rapidly, and requirements for expedited data collection. Thus, implementation of these designs is not always practical. In addition, the model may fail to reach the recommended phase 2 dose if prior distributions for the parameters of the dose–toxicity curve are inadequate or if prior assumptions in the dose–toxicity model are too restrictive.67
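As a concrete illustration of the rule-based approach, the Python sketch below encodes the decision logic of the conventional 3+3 design as it is commonly described; actual protocols vary in their exact rules, so this is an illustrative sketch rather than a definitive specification.

# Minimal sketch of the commonly described 3+3 escalation rule (protocol details vary).
def three_plus_three(dlt_in_first_3, dlt_in_expansion=None):
    """Decision for the current dose level, given the number of dose-limiting
    toxicities (DLTs) in the first cohort of 3 and, if applicable, in the
    3-patient expansion cohort at the same dose."""
    if dlt_in_first_3 == 0:
        return "escalate"                    # 0/3 DLTs: open the next dose level
    if dlt_in_first_3 == 1 and dlt_in_expansion is None:
        return "expand"                      # 1/3 DLTs: treat 3 more at this dose
    if dlt_in_first_3 == 1 and dlt_in_expansion == 0:
        return "escalate"                    # 1/6 DLTs: escalation may continue
    return "de-escalate"                     # >=2 DLTs at this dose: MTD exceeded

print(three_plus_three(0))       # escalate
print(three_plus_three(1))       # expand
print(three_plus_three(1, 0))    # escalate
print(three_plus_three(2))       # de-escalate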
In the mid-1940s, the first anticancer mustard agents were administered as “flat doses” (in milligrams), with dose adjustments made based on individual tolerability and response. For subsequent agents (6MP, vincristine), doses were adjusted based on body size; early trials of vincristine used a weight-based dose approach.68 Body surface area (BSA)–based dosing was initially used to derive safe starting doses of anticancer drugs from preclinical animal toxicology data. This practice was partly based on the work of Freireich et al in the late 1950s,69 which reported quantitative comparisons of toxicity of anticancer agents in mice, rats, monkeys, and humans, but was also perceived as an appropriate safety measure as well as a means of individualizing treatment. “Anchoring” is a term describing the tendency to rely on previous information (the “anchor”) when making decisions. During decision-making, anchoring occurs when individuals use prior information to make subsequent judgments. Once an anchor is set, there is a bias toward interpreting new information around the anchor. Thus, body size–based dosing became something of an anchor in chemotherapeutic dosing.

“Allometric scaling” of drug clearance (and dose metrics) by the liver or kidney was derived from observations of resting oxygen consumption or basal metabolism of humans and animals large and small. Basal metabolism was shown to be related to body size by a power function of approximately 0.7,70 meaning that as weight increases by 100%, there is an expected increase in basal metabolic rate of about 62% based on the BSA estimation equation of Gehan and George.71 Therefore, using BSA in determining initial dosing of chemotherapeutics approximated the variation of drug clearance predicted from allometric theory. However, BSA is not well correlated with glomerular filtration rate72 or liver function.73 Other limitations of using BSA as the primary metric for dose adjustments include difficulties in treating obese,74 elderly, or cachectic patients,75 which led to the empirical practice of capping doses or using other metrics for body size such as ideal body weight or adjusted ideal body weight, or capping the BSA at 2.0 m2, for example. Despite studies confirming the safety and importance of using the recommended chemotherapy dosing, overweight and obese patients often receive limited doses,76 which negatively impacts their prognosis.77 Similarly, elderly patients experienced dose delays or dose reductions more frequently than younger patients. With the exception of oral toxicity, there was no difference in reported toxicity in the older patients compared with the younger patients, although this may be because of increased dose reductions and delays.78 Few anticancer agents carry limitations on dosing in the prescribing information; dose capping and other approaches are variable across different medical centers.
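A short numerical Python sketch of the BSA and allometric scaling relationships described above is given below; the BSA coefficients used are those commonly attributed to the Gehan and George equation and should be treated as an assumption to verify against the original reference.

# Minimal sketch of the allometric and BSA relationships discussed above.
def bsa_gehan_george(weight_kg, height_cm):
    # Coefficients commonly attributed to the Gehan and George equation
    # (assumed here for illustration; verify against the original reference).
    return 0.0235 * (height_cm ** 0.42246) * (weight_kg ** 0.51456)   # m^2

# Allometric prediction: doubling body weight with a 0.7 power exponent
# increases basal metabolic rate (and, by analogy, clearance) by ~62%.
print(2 ** 0.7)          # ~1.62

# Doubling weight at constant height increases this BSA estimate by ~43%
# (2 ** 0.51), closer to the ~62% allometric prediction than the 100%
# increase in dose implied by strict mg/kg dosing.
print(bsa_gehan_george(70, 175), bsa_gehan_george(140, 175))   # ~1.85 vs ~2.65 m^2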
There are only a few instances in the development of cytotoxic dosing regimens in the final decades of the last century in which dosing of cytotoxic drugs was capped. Vincristine is best known as a component of the treatment of Hodgkin's lymphoma and acute lymphocytic or lymphoblastic leukemia (ALL). Despite a typical dose of 1.4 mg/m2 used in the successful MOPP regimen for Hodgkin's lymphoma and ALL, the dose of vincristine is empirically capped by many prescribers at a maximum dose of 2 mg to avoid dose-limiting neurotoxicity.79 Prospective studies comparing complete remission rates in patients with and without caps on their vincristine dose showed a higher incidence of adverse events with the full dose.80 The introduction of liposomal vincristine allowed administration of even higher doses without corresponding increases in neurotoxicity.

Numerous reports disputing the utility of BSA as a dose metric have appeared.81, 82 The pharmacokinetics of many anticancer agents are not related (or are not linearly related) to BSA (eg, a prospective study of epirubicin evaluating fixed dosing found no correlation with body size for either pharmacokinetics or neutropenia83). In 2002, Baker et al84 retrospectively evaluated the pharmacokinetics of 33 anticancer agents tested in phase 1 trials from 1991 to 2001 in adult patients. Based on their findings, they recommended that the practice of calculating starting doses using BSA in phase 1 trials be abandoned. A decade later, a review of the effect of body size on anticancer drug clearance found that only 56% of all models included a metric of body size as a covariate for clearance and that 52% of these cases used nonlinear relationships between clearance and weight.85 This argues that neither mg/kg nor mg/m2 dosing is appropriate, as both metrics assume linear relationships between body size and drug clearance. Despite these reports, BSA-based dosing for intravenous agents is still common.86 Although BSA dosing better approximates allometry than weight-based dosing, providing some justification for incorporating BSA, allometric scaling with body composition based on fat-free mass could improve dose individualization, particularly in the obese.87 Nevertheless, there are clearly other factors, such as genotype, that should be considered when determining appropriate doses for these agents.

Another evolution in the dosing of chemotherapy agents is their administration in oncology clinics or infusion centers. Forty years ago, most parenterally administered cytotoxic chemotherapy was given to patients admitted to the hospital. With rare exceptions, chemotherapy dosing is now an outpatient procedure, and more orally administered anticancer agents are being developed. The ability to manage patients as outpatients is in large part because of the improved control of nausea and vomiting provided by 5-HT3 and NK1 antagonists such as ondansetron and aprepitant and improved understanding of how to provide protection from urotoxicity. The quick turnaround and emphasis on decreasing “chair time” provide a convenience for patients but limit the ability to obtain blood samples to assess the drug concentrations achieved with the administered dose. Routine therapeutic drug monitoring (TDM) is performed only for high-dose methotrexate used for the treatment of osteosarcoma or ALL. It is worth noting that monitoring of methotrexate was historically performed to assure that sufficient duration and doses of leucovorin were provided to prevent life-threatening toxicity, not to assure that methotrexate exposure was optimized.
In contrast, recent TOTAL protocols at St. Jude Children's Research Hospital include evaluation of methotrexate concentrations, with dose adjustments to attain target concentrations with subsequent doses.88, 89

Polymorphisms of drug metabolism have been demonstrated to affect the toxicity and efficacy of anticancer agents. St. Jude Children's Research Hospital was again an innovator in the prospective assessment of the thiopurine methyltransferase (TPMT) genotype to guide the dosing of 6MP and other thiopurines.90 The recommendation to perform germ-line genotyping to guide dosing of cytotoxics, to accommodate likely differences in pharmacokinetics and resulting exposure, has found its way into the US Food and Drug Administration (FDA)–approved labeling of several drugs (Table 1). Genotyping is not required, however, and the labeling of these drugs provides only vague guidance on how doses should be modified if mutations are identified.

Irinotecan (CPT-11) is a prodrug metabolized by a carboxyesterase to the active molecule SN-38. SN-38 is conjugated by UGT1A1 to form the less toxic glucuronide. Ratain and colleagues showed higher toxicity from CPT-11 in patients with the (TA)7TAA (UGT1A1*28) polymorphism compared with those with the usual 6-repeat, (TA)6TAA, although multiple other enzymes and transporters have been shown to contribute to the risk of toxicity.91 The UGT1A1*28 polymorphism has an approximately 26%–56% prevalence in the US population and varies among races, with blacks typically having a higher and Asians typically a lower prevalence of the mutation than do whites.92, 93 Given the relatively high prevalence of slower metabolism of SN-38, the active form of CPT-11, it seems possible that the labeled dose of irinotecan (180 mg/m2) was inadvertently designed for patients with an impaired ability to detoxify SN-38, and the approximately 67% of patients receiving irinotecan who have wild-type UGT1A1 may be underdosed. This hypothesis was supported by Marcuello et al,94 who performed a post FDA-approval phase 1 dose-escalation study of irinotecan in 3 cohorts of patients stratified by UGT1A1 genotype (wild type, heterozygous, homozygous). The group with mutation of both UGT1A1 alleles had an MTD of 150 mg/m2, slightly lower than the labeled dose of irinotecan. In contrast, patients with wild-type UGT1A1 or a heterozygous mutation demonstrated MTDs of 450 and 390 mg/m2, respectively. Interestingly, the current labeling of irinotecan suggests that the dose should be decreased from the usual dose in patients homozygous for the *28 polymorphism, when it may instead be more rational to argue for an increase from the usual dose in patients with a wild-type or heterozygous UGT1A1 genotype.
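To make the genotype-stratified argument concrete, the Python sketch below contrasts the cohort MTDs reported by Marcuello et al with the labeled dose; it illustrates the reasoning only and is not dosing guidance, and the star-allele genotype labels are conventional notation assumed here for readability.

# Minimal sketch contrasting genotype-stratified MTDs (as reported by Marcuello
# et al) with the labeled irinotecan dose. Illustrative only; not dosing guidance.
MTD_BY_UGT1A1_GENOTYPE = {
    "*1/*1 (wild type)":     450,   # mg/m^2
    "*1/*28 (heterozygous)": 390,
    "*28/*28 (homozygous)":  150,
}
LABELED_DOSE = 180   # mg/m^2, usual labeled dose

for genotype, mtd in MTD_BY_UGT1A1_GENOTYPE.items():
    direction = "above" if mtd > LABELED_DOSE else "below"
    print(f"{genotype}: study MTD {mtd} mg/m^2 is {direction} the labeled {LABELED_DOSE} mg/m^2")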
