After years of enormous research efforts to systematically catalogue genetic alterations with causative function in cancer, the goal of “personalized medicine” in clinical oncology is now potentially within reach (1). With the overall aim of characterizing more than 25,000 genomes from the 50 most relevant cancer types, major international endeavors are ongoing to provide a complete inventory of oncogenic mutations (2). When combined with the massive capacity of modern pharmaceutical companies to screen for inhibitors that target “druggable” mutant gene products, this undertaking will offer unprecedented opportunities to treat cancer through “precision” approaches whereby therapeutic decisions are informed by the genomic makeup of each tumor in each patient.

Nevertheless, the ultimate clinical implementation of personalized medicine in oncology remains a major challenge. Indeed, the categorization of molecularly circumscribed tumor subpopulations featuring specific genetic lesions, the validation of such lesions as therapeutic targets, and the definition of biomarkers for accurate prediction of sensitivity to rational treatments all face technical, logistic, and ethical limitations in patients. If cancer therapies must be tailored to small, genetically defined patient subgroups, the effort required to identify, recruit, and treat enough patients to validate the therapeutic relevance of new targets would have to be massive and might not justify high-risk drug development strategies. Highly reliable preclinical models that discriminate between “actionable” therapeutic opportunities and those with weak clinical transferability are therefore urgently needed to improve the bench-to-bedside pipeline and systematically increase the success rate of rationally based clinical trials.

Further underscoring this critical need, the comprehensive and informed review by Lieu and colleagues in this issue of the Journal (3) depicts a different scenario, in which bedside-to-bench approaches have clearly dominated the drug development scene over the last decade. Most of the genetic biomarkers currently used to predict drug efficacy in patients were originally identified in retrospective clinical studies and only subsequently validated mechanistically in preclinical models. There are many prototypic examples, ranging from the observation that patients with EGFR-mutant or ALK-translocated lung cancer respond to blockade of the corresponding targets, to the demonstration that mutations in the KRAS gene correlate with lack of benefit from anti-EGFR antibodies in colorectal cancer (3). By robustly validating previously identified biomarkers of response and resistance to drugs, a number of studies, both in cell lines (4,5) and in patient-derived xenografts (6), have demonstrated the potential of preclinical models as tools to generate clinically relevant predictions. However, the fact that these approaches have infrequently translated into reliable instruments for the design of successful trials reinforces the notion that preclinical methodology should be renewed and adapted to meet the current needs of translational research. This, in turn, could restore preclinical research to a central role in defining priorities and strategies for the clinical development of drug candidates, hopefully contributing to overcoming the current obstacles in the field.
First, a consensus is needed to define unequivocally what should be deemed a successful endpoint at the preclinical level. Raising the bar to more stringent criteria for evaluating treatment efficacy in preclinical studies would probably reduce the attrition of hypotheses during the clinical phases of experimentation. For example, as clearly stated by Lieu et al., there is growing evidence that predictions of drug sensitivity based on tumor regressions in vivo, especially when observed in patient-derived xenografts, are more robust indicators of clinical transferability than those based on the more widely used criterion of tumor growth inhibition. However, although it is probably easy to reach general agreement on these assumptions when targeting the cell-autonomous properties of cancer in vivo (mainly based on the concept of oncogene addiction), the situation is much less obvious when dealing with the tumor microenvironment. In this context, the artifacts introduced by using nonhuman hosts are certainly more pronounced, and the definition of unquestionable, clinically relevant endpoints is not trivial. Matters are even more complicated in vitro: the results of assays aimed at testing drug sensitivity in cancer cell lines can barely be correlated with direct measures of clinical efficacy. Thus, while remaining an invaluable resource for high-throughput screening, hypothesis generation, and mechanistic investigation, cell lines should be regarded mainly as prioritization tools. In this view, cell line–based screens should be oriented toward selecting promising options that deserve independent evaluation through in vivo approaches (ideally based on patient-derived xenografts), which, albeit more laborious, are more readily interpretable in terms of translational implications. This will allow the definition of clear-cut standards of preclinical activity, ideally based on objective endpoints that correlate statistically with clinical efficacy. In turn, consolidated preclinical knowledge will help rationalize risk assessment–based drug development policies for informed go/no-go decisions.