Abstract

The science of clinical pharmacology makes tremendous contributions to drug development, including an understanding of pharmacokinetics (PK) and pharmacodynamics (PD) and delineation of dose responses for efficacy and safety. Equally important and well documented is the impact that clinical pharmacology studies have on regulatory decisions related to product labels and/or drug approval. Clinical pharmacologists are responsible for identifying dose adjustments necessary to account for changes in PK and/or PD in special populations defined by intrinsic (e.g., age) and extrinsic factors (e.g., food-drug interactions). The most prevalent extrinsic factor studied during drug development is concomitant medications. Clinical, i.e., in vivo, drug-drug interaction (DDI) studies are critical to a drug development program. PK and PD interactions between investigational new small-molecule drugs and co-administered drugs need to be identified early in drug development when there is a high risk of interaction based on mechanistic, metabolism- or transporter-based in vitro screening, prior experience with a drug class, or when pivotal efficacy trials include concomitant drug administration. Most DDI studies are comparative PK assessments of the investigational drug alone and with co-administered medications. Investigational new drugs can act as either cytochrome P450 and/or transporter inhibitors or inducers (i.e., as perpetrators) and influence the exposure to co-administered drugs (i.e., the victims). The tables may be turned, whereby investigational new drugs become the victims and the co-administered drugs act as the perpetrators. It is worth noting that not all DDIs are harmful. For example, ritonavir acts as a PK enhancer by inhibiting both enterocyte CYP3A4 and the efflux transporter P-glycoprotein (P-gp) and thereby improves drug therapy by enabling dose reductions and less frequent dosing of protease inhibitors such as saquinavir.1
Regulatory agencies have done a commendable job of providing guidance to industry on study design, data analysis, and dosing implications of DDI studies. Guidance on the study of drug interactions intuitively seems to offer an appropriate and rigorous approach to detecting serious DDIs before an investigational new drug reaches the marketplace. The goals of a stepwise and rigorous mechanistic approach to DDIs are to determine whether an investigational drug is a substrate for, and an inhibitor or inducer of, a CYP450 pathway or drug transporter process. These in vitro screening DDI studies, in conjunction with mechanistic and/or dynamic models (e.g., physiologically based PK models), are used to suggest which in vivo DDI studies of the investigational drug are needed to quantitatively evaluate the nature and magnitude of PK and/or PD changes that may warrant dose adjustments in the presence of concomitant medications. Although DDI guidance is similar in the United States, Europe, and Japan, there are differences among the regions in the in vitro criteria that suggest a need for a clinical DDI study.
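To illustrate the kind of in vitro-to-in vivo translation these criteria encode, the sketch below implements a basic static model of reversible CYP inhibition, in which the predicted fold-change in the victim drug's AUC is 1 + [I]/Ki. This is a minimal illustration, not the authors' method or any specific agency's algorithm; the decision cutoff of 1.1 and the example concentrations are illustrative assumptions.
```python
# Illustrative sketch only (not from the commentary or any specific guidance):
# a basic static model of reversible CYP inhibition used to decide whether an
# in vivo DDI study should be considered. Cutoff and inputs are assumptions.

def predicted_auc_ratio(i_conc_uM: float, ki_uM: float) -> float:
    """Basic static model: predicted victim AUC fold-change = 1 + [I]/Ki,
    assuming the victim is cleared entirely by the inhibited enzyme."""
    return 1.0 + i_conc_uM / ki_uM


def flag_clinical_ddi_study(i_conc_uM: float, ki_uM: float, cutoff: float = 1.1) -> bool:
    """Flag a potential clinical DDI when the predicted ratio meets the cutoff
    (equivalent to an [I]/Ki threshold of cutoff - 1)."""
    return predicted_auc_ratio(i_conc_uM, ki_uM) >= cutoff


if __name__ == "__main__":
    # Hypothetical perpetrator: relevant inhibitor concentration 0.5 uM, Ki 2.0 uM.
    ratio = predicted_auc_ratio(0.5, 2.0)
    print(f"Predicted AUC ratio: {ratio:.2f}")
    print(f"In vivo DDI study suggested: {flag_clinical_ddi_study(0.5, 2.0)}")
```
Because Ki estimates vary widely among laboratories and the relevant value of [I] is uncertain, small shifts in these inputs can move a drug pair across any such cutoff, which is the crux of the false positive and false negative problem taken up below.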
Interestingly, there have been no formal measures of the predictability of in vitro DDI screening programs. Wu et al. conducted a retrospective evaluation of 152 in vivo DDI studies of drug pairs published between 2007 and 2011.2 They found that 28 (18%) of the in vivo studies were based on in vitro DDI mechanistic screening and that only 12 studies (8%) demonstrated a statistically significant change in the AUC of the victim drug. The authors showed that the probability of identifying in vivo interactions increased from 8% to 32% when there was an in vitro mechanistic rationale. Ideally, high-quality in vitro DDI studies should correctly identify and qualitatively predict significant in vivo DDIs (100% sensitivity) and correctly identify drug pairs that do not have significant in vivo DDIs (100% specificity). However, it is widely recognized that in vitro DDI screening is not perfect, and the criteria that define a positive or negative finding may vary significantly among laboratories. As a result, false positive in vitro findings can lead to further in vivo DDI studies that are not significant or clinically meaningful, and false negative in vitro results may miss instances of significant or clinically meaningful in vivo DDIs. To our knowledge, no studies have determined the sensitivity and specificity of in vitro DDI studies.
There are good reasons to be concerned about the false positives and false negatives of in vitro screening. Greenblatt found that a sensitive and specific in vitro predictive paradigm for in vivo DDIs involving CYP3A4 has not been achieved, based on evidence of marked interlaboratory variability in the determination of the in vitro inhibition constant [Ki] and the inability to measure the true concentration of the perpetrator drug at the active site of the enzyme [I].3 This is important because guidances from regulatory agencies suggest using the [I]/[Ki] ratio to determine the likelihood of a clinical DDI, and where the threshold for this ratio is set influences the false positive and false negative rates of clinical DDI studies. We recommend re-examination of the [I]/[Ki] ratio threshold based on accumulated in vitro and in vivo DDI information to explore an optimal balance of the risks of false positive and false negative in vivo findings. The International Transporter Consortium (ITC) reported an 18- to 796-fold interlaboratory variation in the in vitro IC50 for digoxin transport (the inhibitor concentration achieving 50% of maximal inhibition) and concluded that in vivo DDI studies with digoxin serve as digoxin safety studies rather than as predictors of P-gp inhibition.4 Fenner et al. collected and analyzed 123 in vivo digoxin DDI studies and found that 97% of these studies showed a change in Cmax or AUC of less than 2-fold in the presence of a P-gp inhibitor, suggesting a high rate of false positives with in vitro screening for P-gp interactions.5
These data raise the question of the return on investment (ROI) of in vitro DDI screening and in vivo DDI studies. It is noteworthy that the average number of in vivo DDI studies increased from 5.5 per new molecular entity (NME) in 2012 to 12 per NME in 2013.6,7 This increase may be related to the release of the FDA guidance for industry on drug interactions in 2012. The study of DDIs adds significantly to the time and cost of drug development, as well as the cost in human resources, because the majority of clinical DDI studies are conducted in healthy volunteers who do not require treatment with the co-administered drugs. False positive in vitro DDI screening leads to unnecessary in vivo studies or to in vivo studies with negative results. False negative in vitro screening may obviate in vivo DDI studies and provide a false sense of security that ultimately results in serious harm to patients.
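Treating the in vitro screen as a diagnostic test, as suggested above, implies a simple 2x2 tabulation against in vivo outcomes. The counts in the sketch below are entirely invented and are included only to show how sensitivity, specificity, and the false positive and false negative rates of an in vitro DDI screening program would be computed once in vitro flags are paired with the corresponding in vivo results.
```python
# Hypothetical illustration: computing diagnostic-test metrics for an in vitro
# DDI screen, treating the corresponding in vivo DDI study as the reference
# standard. All counts below are invented for demonstration purposes.

def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """tp/fp/fn/tn are drug-pair counts cross-classified by in vitro flag
    (positive/negative) and in vivo DDI outcome (positive/negative)."""
    return {
        "sensitivity": tp / (tp + fn),          # true in vivo DDIs that were flagged in vitro
        "specificity": tn / (tn + fp),          # non-interacting pairs correctly cleared
        "false_positive_rate": fp / (fp + tn),  # unnecessary in vivo studies triggered
        "false_negative_rate": fn / (fn + tp),  # real interactions missed by the screen
        "positive_predictive_value": tp / (tp + fp),
    }


if __name__ == "__main__":
    # Invented 2x2 table for 100 drug pairs with both in vitro and in vivo data.
    for name, value in screening_metrics(tp=20, fp=35, fn=5, tn=40).items():
        print(f"{name}: {value:.2f}")
```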
ROI traditionally measures the return on an investment relative to its cost. In the context of clinical pharmacology studies, determining a financial ROI can be daunting. Recently, a first step was taken to measure the ROI for PK studies in patients with impaired renal function.8 A positive ROI was defined as either a significant change in PK as a result of impaired renal function and/or a change in label dosing recommendations for patients with impaired renal function. Mean Cmax and AUC data were obtained for patients with mild renal impairment (creatinine clearance of 50-80 mL/min) for 277 NMEs approved by the FDA between 2000 and 2012. The study concluded that there was a low ROI because 95% of the NMEs showed neither a significant change in exposure nor a resulting decision to modify the dose in the product label. It was suggested that it might be unnecessary in the future to conduct dedicated renal impairment studies in the mild-impairment group except in the few cases where the NME has a narrow therapeutic index. The results also implied that negative outcomes of special populations' PK studies are not informative because they provide no actionable information in the package insert. However, an alternative point of view is that knowing there is no effect of mild renal impairment on PK is useful information in the sense that it is one less thing to worry about in using the drug.
We recently presented the results of a survey of in vivo DDI studies in new drug applications (NDAs) for each NME approved by the FDA in 2013.9 These data are available in the public domain at Drugs@FDA, and the study is in the process of being submitted for publication. There were 27 NMEs for which a collective total of 246 in vivo DDI studies were conducted during the drug development process. For each NME, we assessed the increase or decrease in Cmax and AUC of the victim drug in the presence or absence of the perpetrator drug and the 90% confidence intervals (CIs) about the geometric mean ratios of the observed Cmax and AUC. We defined a positive in vivo DDI study as one in which the 90% CI of the Cmax or AUC ratio did not fall completely within the 80%-125% no-effect boundaries derived from the so-called bioequivalence (BE) interval. The ROI was measured by the percentage of in vivo DDI studies that were positive, although not all of these studies were clinically meaningful. Our results showed only a modest ROI because 57% of all in vivo DDI studies (n = 141) conducted in the 2013 cohort of 27 NMEs were negative in terms of either exposure changes or impact on dosing in the label. However, this ROI compares favorably to the low ROI (95% of studies with negative results) found in studies of mild renal impairment. In many cases, even when the change in Cmax or AUC exceeded the 125% upper boundary or fell below the 80% lower boundary, and was therefore statistically significant, the in vivo DDI was deemed not clinically significant. With respect to the 43% of in vivo DDI studies (n = 105) that were positive in our cohort, the labels were modified with actionable information in two-thirds of the cases, including dose adjustments, warnings and precautions, contraindications, or recommendations for therapeutic drug monitoring.
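The decision rule we applied can be made concrete with a short sketch: compute the geometric mean ratio and its 90% CI from paired exposures on the log scale, then ask whether the CI lies entirely within the chosen no-effect boundaries. The exposure values below are invented, and the same check accepts the wider 70%-143% limits discussed below.
```python
# Hypothetical sketch of the decision rule described above: compute the
# geometric mean ratio (GMR) and its 90% CI from paired log-transformed
# exposures, then test whether the CI lies entirely within the no-effect
# boundaries (80%-125% by default, or wider limits such as 70%-143%).

import math
from statistics import mean, stdev
from scipy.stats import t


def gmr_90ci(alone, combo):
    """Paired analysis on the log scale; returns (GMR, lower, upper) as ratios."""
    diffs = [math.log(c) - math.log(a) for a, c in zip(alone, combo)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)
    half_width = t.ppf(0.95, n - 1) * se  # two-sided 90% CI
    m = mean(diffs)
    return tuple(math.exp(x) for x in (m, m - half_width, m + half_width))


def no_interaction(ci_low, ci_high, limits=(0.80, 1.25)):
    """'Negative' DDI study: the whole 90% CI sits inside the no-effect limits."""
    return limits[0] <= ci_low and ci_high <= limits[1]


if __name__ == "__main__":
    # Invented victim-drug AUCs alone and with the perpetrator; in this example
    # the CI falls outside 80%-125% but inside 70%-143%.
    auc_alone = [100, 85, 120, 95, 110, 105, 90, 115]
    auc_combo = [128, 115, 160, 122, 148, 140, 116, 155]
    gmr, lo, hi = gmr_90ci(auc_alone, auc_combo)
    print(f"GMR {gmr:.2f}, 90% CI {lo:.2f}-{hi:.2f}")
    print("Negative at 80-125%:", no_interaction(lo, hi))
    print("Negative at 70-143%:", no_interaction(lo, hi, limits=(0.70, 1.43)))
```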
These results can be viewed in a couple of ways. On the one hand, if we assume that the decision to conduct in vivo DDI studies was based on in vitro screening results, then the ROI is not impressive, because there appears to be a high false positive rate given that 57% of in vivo studies turned out to be negative. On the other hand, we are fully aware that some in vivo studies are conducted in the absence of any in vitro rationale in order to support label claims that may be important for marketing or clinical practice reasons. In these cases, the ROI would be artificially reduced because such in vivo DDI studies are expected to be negative.
In summary, we believe it is intuitively clear that the pharmaceutical industry and regulatory agencies would benefit from scrutinizing the predictability of in vitro DDI screening studies in terms of the results of corresponding in vivo DDI studies. As with diagnostic tests, measures of sensitivity and specificity could be calculated for in vitro DDI screening tests. Virtually every NME submitted by industry sponsors to the FDA over the past 5 years had extensive in vitro and in vivo DDI data. There are likely to be tremendous lessons learned from the hundreds of in vitro and in vivo DDI studies archived in NDAs. This evaluation could be similar to the way the Cardiac Safety Research Consortium used an electrocardiogram warehouse and archived data sets from thorough QT (TQT) studies to explore under what circumstances such studies are or are not needed.10
We also suggest that this in vitro and in vivo drug interaction database, freely available to industry and the FDA, can be used to evaluate the continued practice of using the traditional fixed BE limits of 80% to 125% to define a priori the "no effect" boundaries for in vivo DDIs in the absence of a well-defined PK/PD relationship for the victim drug. This criterion is too stringent given that the objectives of clinical DDI studies, which are exploratory in nature, are inherently different from those of BE studies, which are confirmatory in nature. Furthermore, less stringent boundaries make sense given the relatively small sample size of many in vivo DDI studies, the influence of PK outliers, and the possibility of large inter- and intra-subject variability in Cmax and AUC for either the victim or perpetrator drug. This would not be unlike, for example, using wider comparative BE limits of 70% to 143% with a point estimate constraint for Cmax in the study of products that previously exhibited large within-subject PK variability in Cmax or AUC (i.e., highly variable drug products). Critical-dose drugs (i.e., those with a narrow therapeutic index) would not be suitable candidates for widened CI limits.
There would be many potential advantages to evaluating the ROI of the archived DDI information for recent FDA-approved drugs from Drugs@FDA, including identifying information gaps in linking in vitro and in vivo DDI studies to improve our prediction of, in particular, positive in vivo DDIs; reducing the large number of negative in vivo DDI studies that do not significantly inform labeling with actionable language; reducing the regulatory burden on industry of conducting in vivo DDI studies; and, finally, facilitating re-allocation of the costs of in vivo DDI studies to other clinical pharmacology studies, such as better dose finding, that may provide more valuable information geared toward maximizing the benefits and minimizing the risks of investigational new drugs.
This commentary and research received no specific grant support from any commercial or not-for-profit funding organization. The authors declare no conflicts of interest.
