
Science & Society | 2 July 2007 | Free access

Thinking inside the box

To cope with an increasing disease burden, drug discovery needs biologically relevant and predictive testing systems

Lars E. Sundstrom, Division of Clinical Neurosciences, Southampton University

EMBO Reports (2007) 8: S40–S43. https://doi.org/10.1038/sj.embor.7400939

In the next few decades, brain medicine will present a particular socioeconomic challenge for ageing citizens worldwide. Many disorders and ailments that affect the brain—including Alzheimer disease, Parkinson disease, dementia and stroke—are chronic conditions that persist for years or even decades. In addition, many of these disorders have devastating effects, which together create a substantial burden on society: about one-third of the global disease burden can be attributed to disorders of the brain or nervous system (Olesen & Leonardi, 2003; Andlin-Sobocki et al, 2005). In the USA alone, there are as many as 5.5 million individuals with Alzheimer disease, 1.5 million with Parkinson disease and 400,000 with multiple sclerosis. These devastating diseases not only place a heavy emotional burden on patients and their care-givers, but also have an important economic impact.
Unfortunately, this tremendous socioeconomic problem is worsening, because the risk of suffering from a brain disease increases with age, and life expectancy is increasing worldwide, particularly in developed countries. In the case of Alzheimer disease, for example, ∼10% of US citizens aged over 65 years are affected; the prevalence rate has more than doubled since 1980 and could increase twofold to threefold by 2050 if the current trend continues (Alzheimer's Association, 2006).

Research, however, has had little impact on these statistics. Despite vigorous efforts from both the pharmaceutical industry and biomedical researchers, there are still no disease-modifying treatments available for Alzheimer disease, multiple sclerosis, stroke and a range of other neurological disorders. Some have argued that this problem cannot be solved by standard high-throughput target-based drug discovery methods, because we simply do not know which mechanisms to target. Instead, we should revert to previous empirical models of drug discovery to find new medicines (Lansbury, 2004; Williams, 2004). However, given the current requirements for new drugs to meet regulatory standards and patient expectations, it is unlikely that a straight reversion to the old 'serendipitous' methods alone will be any more successful. Solving these problems requires new thinking and a new approach.

The origins of modern drug discovery can be traced back to the end of the nineteenth century, which marked the confluence of chemistry and pharmacology. Before then, the discovery of new medicines relied mainly on serendipity and the subsequent observation of drug effects in humans (Drews, 2000).
Throughout the past century, modern synthetic chemistry and biotechnology have led to massive increases in the number of new drugs being tested, which in turn have driven the development of alternative methods for determining efficacy and safety. During the past 50 years, laboratory animals have been the first choice for testing drugs before administering them to humans. However, the need for animal testing is challenged both by scientific arguments and by societal expectations, which are driving the search for alternative methods. As a result, drug discovery has become less reliant on observing effects on whole organisms and more focused on discovering how drug candidates interact with their targets in isolated systems.

This target-based approach has culminated in the advent of new technologies, such as combinatorial chemistry to facilitate the rapid generation of a huge number of molecules, high-throughput automated screening to test hundreds of thousands of molecules in a single run, and the sequencing of the human genome to reveal potential new drug targets. The vast number of tests that such systems can perform would be unimaginable using laboratory animals.

The trend over the past two decades has therefore been to attempt to make drug discovery more predictable using systematic and automated approaches; however, this too has problems (Higgs, 2004). First, high-throughput screening generates far too many molecules to be tested in animals; this 'functional pharmacology' bottleneck continues to be a rate-limiting step (Walker et al, 2004). Second, many lead compounds found in this way have failed when tested in whole organisms—if they even make it that far.
Third, the system is based on targets, but for most diseases we do not know which targets are relevant, and several targets are likely to be involved in most cases, making this approach to drug discovery fundamentally flawed.

If the success of drug discovery during the past 20 years has been based on a 'numbers game'—in other words, on how many combinations of compounds could be tested against isolated targets in the shortest possible time—it is surprising that it has not been more effective. As familiar statistics show, the cost of developing a new drug is spiralling upwards, with estimates of as much as US$1 billion to bring a new medicine to market (DiMasi et al, 2003). Yet, despite annual investments in research and development of around US$25 billion, the number of new medicines has steadily declined over the past 15 years (Harris, 2002). Although several reasons have been put forward to explain the high failure rate, drugs generally fall short for one of two reasons: they do not work well enough or at all, or they have too many side effects to be administered safely to a large population. This lack of progress in drug development cannot be ignored and will continue to have an influence on our lives—either as patients or as taxpayers—because the disease burden on our ageing societies is increasing with time.

A new trend is now emerging, in which the virtues of high-throughput technologies are combined with serendipity to create a high-throughput biological interface for testing the efficacy and toxicity of new drugs. This method does not have its roots in a target-based approach, but instead tries to find compounds that have effects on biological systems. It is therefore closer to the old 'black-box' method of testing potential drugs to see what effects they have on an organism (Fig 1).
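The distinction can be sketched as a toy calculation; everything below (the readouts, compound names and scoring rule) is hypothetical and purely illustrative, not a description of any real screening platform. A black-box phenotypic screen ranks compounds by how far they move a measured disease phenotype toward a healthy reference state, with no knowledge of targets assumed:

```python
# Toy illustration (hypothetical data) of black-box phenotypic scoring.
# A target-based screen would rank compounds by affinity for one chosen
# target; here we instead rank them by how far they shift a vector of
# phenotype readouts (e.g. viability, two marker levels) from the
# diseased state toward the healthy state.

import math

HEALTHY = (1.0, 0.2, 0.0)   # idealised readouts of a healthy culture
DISEASED = (0.3, 0.9, 0.8)  # idealised readouts of the disease model

def distance(a, b):
    """Euclidean distance between two phenotype vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def phenotypic_score(treated):
    """How much closer to the healthy state did the compound move the system?"""
    return distance(DISEASED, HEALTHY) - distance(treated, HEALTHY)

# Hypothetical post-treatment readouts for three compounds:
compounds = {
    "cmpd_1": (0.9, 0.3, 0.1),  # strong phenotypic rescue, mechanism unknown
    "cmpd_2": (0.4, 0.8, 0.7),  # potent on an isolated target, weak in the system
    "cmpd_3": (0.2, 1.0, 0.9),  # makes the phenotype worse
}

ranked = sorted(compounds, key=lambda c: phenotypic_score(compounds[c]),
                reverse=True)
print(ranked)  # best phenotypic hit first
```

Note that cmpd_1 tops the ranking even though its mechanism is unspecified; that is precisely the point of the black-box approach.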
This approach, which some have termed 'physiologic' or 'functional' screening (Sundstrom et al, 2005; Sams-Dodd, 2006), promises to deliver high-throughput biology in collaboration with fields such as systems biology and chemical genomics (Giuliano et al, 2006).

Figure 1. Comparison of target-based drug discovery with black-box phenotypic screens.

The aim of this approach is to identify drugs that move an organism from one biological state to another, even when their biological actions are not completely known or understood. One such example is Viagra® (sildenafil), which was discovered by researchers at Pfizer (Sandwich, UK): while testing a phosphodiesterase inhibitor developed to treat a heart condition, they serendipitously discovered that the drug alleviates erectile dysfunction (Ghofrani et al, 2006). The lesson from this example is that we should study further, rather than ignore, the side effects of drugs (Warne & Page, 2003).

In fact, a recent overview found that fewer than 200 distinct 'druggable domains' or targets are known to work in humans, and that most of the molecules acting on them function through several different mechanisms (Overington et al, 2006). Because drugs tend to affect phenotype rather than genotype, most of these 'validated' drugs are used to treat several different conditions. Given the limited number of known targets and the fact that many efficient drugs obviously work on several targets, the black-box approach might present a viable and promising alternative to the target-based strategy (Fig 2).

Figure 2. Black-box systems that can be used for phenotypic screening. There is a trade-off between the biological complexity and relevance of the test systems and the number of compounds that can be tested.
At one end of the spectrum, testing could be performed directly in humans; however, this is unlikely to be ethically acceptable because the risks are too high. Animal models based on mammals are now the most frequently used system, although this is increasingly seen as unethical and sometimes translates poorly to human conditions. Invertebrates have emerged as biological systems that offer many advantages, as drugs can be screened in whole organisms; however, translation to humans still needs to be proven. Computer modelling has recently become popular, but has yet to prove that it can produce new molecules with efficacy in humans. Simple two-dimensional tissue-culture systems, sometimes referred to as high-content systems, have long been available; they can be generated using primary human and animal cells or immortalized cell lines. The drawback of two-dimensional cell systems is that they do not reflect physiological parameters and can therefore produce misleading data. More recently, three-dimensional tissues, either taken from living mammals or re-engineered in vitro, are being developed as the next generation of screening tools. These systems often reflect complexity at the organ level and are referred to as 'organotypic'.

However, a controversial question remains. What is the best system in which to test drugs: humans, animals, stem cells or computers? Society now faces new choices and must weigh several ethical and moral issues in the search for efficacious and safe medicines that satisfy patients' demands. Clearly the most relevant species in which to test drugs is Homo sapiens, and some have argued that it is the only system in which to do so. However, much controversy surrounds this topic and the extent to which testing in other animals can predict human responses (Wadman, 2006; Archibald, 2006).
In March 2006, the phase I clinical trial of TGN1412, a monoclonal antibody developed by the now-defunct company TeGenero (Würzburg, Germany) to treat leukaemia and arthritis, demonstrated the dangers involved. Despite successful safety tests in monkeys, the trial in humans failed with disastrous consequences, resulting in the hospitalization of all six volunteers (Goodyear, 2006). Because human testing is potentially dangerous, it can be applied only to late-stage drug candidates; in any case, for logistical reasons, few compounds can be tested in this way.

Using non-human mammals for safety testing remains the mainstay of the pharmaceutical industry, but is increasingly criticized as unethical and irrelevant to human disease. It is beyond the scope of this article to debate the ethical aspects of this issue; however, it remains the case that non-human mammalian models are too slow and labour-intensive to screen drugs rapidly (Sundstrom et al, 2005). This is not the case with lower species, and a substantial industry is emerging that screens whole organisms against chemical libraries. Drosophila, Caenorhabditis elegans and zebrafish are now commonly used to assess the toxicity and safety of drugs (Doan et al, 2004). Ethically, these species raise fewer objections than mammals and have the advantage of being able to cope with relatively large compound libraries. However, they are phylogenetically distant from humans and their relevance to human diseases has yet to be determined.

Computer modelling has also been heralded as a new tool for drug discovery: so-called in silico modelling can be used to test predictions about drug safety and efficacy against known reference data. The nascent research field of systems biology, which aims to describe the phenotype of a system from the interactions of its components, has emerged from this method.
In its most recent incarnation, computer modelling of cellular or even organ responses is used to predict biological function. Although this is undoubtedly a powerful new approach, such predictions must still be tested in a biological system.

Individual cells and tissue cultures have long been used by industry, and further automation has led to their systematic use in drug screening. However, the complexity of the cell types now in use is limited. So far, most systems rely on primary cells derived from animals or humans, or on immortalized cell lines. Although both cell types can express various features of human tissue and can be applied in combination with systems biology (Giuliano et al, 2006), the complexity that can be achieved is limited because they usually represent only one cell type, so cell–cell interactions do not occur.

Recently, another promising area has emerged: in vitro systems that are able to represent the functions of a whole organ. These so-called 'organotypic' systems can be created either by removing tissue samples from animal or human donors and culturing them or using them directly, or by generating tissues from stem cells. The first method is limited by the fact that primary tissues must be dissected from the donor organism; access to sufficient quantities of homogeneous tissue, particularly from humans, poses practical and ethical problems, making it unlikely that this method can realistically support the black-box approach discussed above. The second method, however, holds enormous promise: several sources of stem cells have the potential to generate new tissues for drug testing. For example, tissue-derived stem cells, sometimes referred to as 'adult' stem cells, have been isolated from many sources, including bone marrow, brain, adipose tissue, nasal epithelium and umbilical cord.
Although these will undoubtedly raise fewer ethical objections than their embryonic counterparts, the widespread use of adult stem cells is likely to be curtailed by their limited capacity for self-renewal and plasticity. Embryonic stem cells are now the most promising source of tissue for new testing systems in drug discovery (Gorba & Allsopp, 2003; Pouton & Haynes, 2005). The use of embryonic stem cells to generate tissues for drug screening has many of the attributes of the 'perfect' black box: a theoretically unlimited supply of human tissue that can cope with the requirements of modern screening methods and is amenable to automation. They can even be used to generate human disease models, either by taking cells from humans with a genetic condition (such as cystic fibrosis or muscular dystrophy) or by genetically modifying stem-cell lines.

However, the use of human embryonic stem cells raises ethical and moral questions, such as which stem cells are best and how far science should be allowed to go to obtain them. In December 2006, the Roman Catholic Church and French President Jacques Chirac became involved in a controversial debate over the use of funds donated to the French Muscular Dystrophy Association: although less than 2% of the money raised in its annual telethon went to embryonic stem-cell research, some church officials called for the fundraiser to be boycotted (Sciolino, 2006). Another example is the issue of so-called 'chimeric' or hybrid stem-cell systems, which combine human and animal cell types (British Broadcasting Corporation, 2007). These might have important advantages for generating specific tissue types for screening new drugs, but they also raise a range of objections with regard to mixing human and animal biological material.
It is not the aim of this article to discuss the ethical issues of using embryonic stem cells or chimeric cells for research. Rather, its purpose is to draw attention to the health issues that society faces today: an increasing disease burden that we might soon be unable to cope with, a pharmaceutical industry desperately in need of new testing systems, new cell technologies that hold the promise of creating new medicines, and a society that is calling for a reduction in animal testing. In conclusion, there is a clear need to find biologically relevant and predictive test systems for new drugs. The question remains as to which system is best: humans, animals or human stem cells?

Biography

Lars E. Sundstrom is Chief Scientific Officer of Capsant Neurotechnologies (Southampton, UK) and is in the Division of Clinical Neurosciences at Southampton University. E-mail: [email protected]

References

Alzheimer's Association (2006) Alzheimer's Disease Statistics. Chicago, IL, USA: Alzheimer's Association

Andlin-Sobocki P, Jonsson B, Wittchen HU, Olesen J (2005) Costs of disorders of the brain in Europe. Eur J Neurol 12: 1–27

Archibald K (2006) It's time to test the testers. The Guardian, 5 May. www.guardian.co.uk

British Broadcasting Corporation (2007) Hybrid embryo work 'under threat'. BBC News Online, 5 Jan

DiMasi JA, Hansen RW, Grabowski HG (2003) The price of innovation: new estimates of drug development costs. J Health Econ 22: 151–185

Doan TN, Eilertson CD, Rubenstein AL (2004) High-throughput target validation in model organisms. Drug Discov Today Targets 3: 191–197

Drews J (2000) Drug discovery: a historical perspective. Science 287: 1960–1964

Ghofrani HA, Osterloh IH, Grimminger F (2006) Sildenafil: from angina to erectile dysfunction to pulmonary hypertension and beyond. Nat Rev Drug Discov 5: 689–702

Giuliano KA, Johnston PA, Gough A, Taylor DL (2006) Systems cell biology based on high-content screening. Meth Enzymol 414: 601–619

Goodyear M (2006) Learning from the TGN1412 trial. BMJ 332: 677–678

Gorba T, Allsopp TE (2003) Pharmacological potential of embryonic stem cells. Pharmacol Res 47: 269–278

Harris G (2002) Why drug makers are failing in search for new blockbusters. Wall Street Journal, 18 Apr

Higgs G (2004) Molecular genetics: the Emperor's clothes of drug discovery? Drug Discov Today 9: 727–729

Lansbury PT (2004) Back to the future: the 'old-fashioned' way to new medications for neurodegeneration. Nat Med 10: S51–S57

Olesen J, Leonardi M (2003) The burden of brain diseases in Europe. Eur J Neurol 10: 471–477

Overington JP, Al-Lazikani B, Hopkins AL (2006) How many drug targets are there? Nat Rev Drug Discov 5: 993–996

Pouton CW, Haynes JM (2005) Pharmaceutical applications of embryonic stem cells. Adv Drug Deliv Rev 57: 1918–1934

Sams-Dodd F (2006) Drug discovery: selecting the optimal approach. Drug Discov Today 11: 465–472

Sciolino E (2006) Catholic clergy attack French telethon over stem cell aid. The New York Times, 8 Dec. www.nytimes.com

Sundstrom L, Morrison B, Bradley M, Pringle A (2005) Organotypic cultures as tools for functional screening in the CNS. Drug Discov Today 10: 993–1000

Wadman M (2006) Earlier drug tests on people could be unsafe, critics warn. Nat Med 12: 153

Walker MJ, Barrett T, Guppy LJ (2004) Functional pharmacology: the drug discovery bottleneck? Drug Discov Today Targets 3: 208–215

Warne P, Page C (2003) Is there a best strategy for drug discovery? Drug News Perspect 16: 177–182

Williams M (2004) A return to the fundamentals of drug discovery? Curr Opin Investig Drugs 5: 29–33
