  • Research Article
  • 10.1080/19466315.2026.2621850
Phase 2 Design Considerations from a Program Perspective
  • Jan 30, 2026
  • Statistics in Biopharmaceutical Research
  • Tobias Mielke + 2 more

One of the key objectives of Phase 2 trials in drug development is de-risking the investment into costly Phase 3 trials. This de-risking introduces added costs, an extended drug-development duration, and a likelihood of discontinuing truly efficacious interventions based on chance findings. The choice of Phase 2 sample size and decision rules therefore has a large impact on the value of development programs. Still, Phase 2 designs are frequently guided only by rather qualitative discussions of Type 1 error and power, instead of addressing the financial impact more directly. In this paper, we introduce a model that allows for rapid assessment of the expected net present value (eNPV) and for optimization of the Phase 2 sample size and decision rules. Using a limited number of additional assumptions, the model generates helpful insights that ease understanding of program design operating characteristics. We discuss key limitations of designs driven solely by eNPV optimization and propose an alternative that uses constrained optimization to optimize candidate Phase 2 study designs.
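
As a rough illustration of the trade-off the abstract describes, the sketch below grid-searches a Phase 2 sample size and go/no-go threshold to maximize a toy eNPV. All costs, payoff, prior probability of efficacy, and effect sizes are invented for illustration and are not the paper's model or values.

```python
"""Hypothetical eNPV sketch for Phase 2 sample-size and go/no-go optimization.

All costs, prior probabilities, and effect sizes below are illustrative
assumptions, not values from the paper.
"""
import numpy as np
from scipy.stats import norm

SIGMA = 1.0          # known outcome SD (assumption)
P_EFF = 0.3          # prior probability the drug is truly effective
DELTA = (0.0, 0.3)   # true effects: ineffective / effective
COST_P2 = 0.05       # Phase 2 cost per patient ($M, illustrative)
COST_P3 = 150.0      # fixed Phase 3 cost ($M, illustrative)
VALUE = 1000.0       # payoff of a successful Phase 3 ($M, illustrative)
N3 = 500             # Phase 3 patients per arm (held fixed here)

def enpv(n2, z_go):
    """Expected NPV for a Phase 2 with n2 patients/arm and go threshold z_go."""
    out = -COST_P2 * 2 * n2                          # Phase 2 is always paid for
    for p_state, d in zip((1 - P_EFF, P_EFF), DELTA):
        se2 = SIGMA * np.sqrt(2 / n2)                # SE of the effect estimate
        p_go = norm.sf(z_go - d / se2)               # P(Z > z_go | true effect d)
        se3 = SIGMA * np.sqrt(2 / N3)
        p_win3 = norm.sf(norm.ppf(0.975) - d / se3)  # one-sided 2.5% Phase 3
        out += p_state * p_go * (-COST_P3 + p_win3 * VALUE)
    return out

# crude grid search over Phase 2 size and decision threshold
grid = [(n, c) for n in range(40, 401, 20) for c in np.arange(0.0, 3.0, 0.1)]
best = max(grid, key=lambda nc: enpv(*nc))
print(f"best n2/arm = {best[0]}, go threshold z = {best[1]:.1f}, "
      f"eNPV = {enpv(*best):.1f} $M")
```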

  • Open Access
  • Discussion
  • 10.1080/19466315.2026.2620729
Mainstream Meta-Analysis of Clinical Trials Produces Strongly Inconsistent Estimators
  • Jan 24, 2026
  • Statistics in Biopharmaceutical Research
  • Jonathan J Shuster

Mainstream random-effects meta-analysis of clinical trials (IVW, weights inversely proportional to the estimated variance) cannot be trusted, as has been shown in two publications. This is especially disturbing because meta-analysis, the combination of like studies to produce an overall estimate of effect size, sits at the apex of most evidence pyramids. The asymptotic distribution theory of the mainstream method fails because weighted linear combination theory can only be applied to weights that are constants to a high degree of accuracy; in fact, the weights are certainly volatile random variables. The asymptotic setting requires that the number of studies being combined goes to infinity. Our novel finding is that the coverage of the mainstream “95% confidence interval” for overall effect size converges to zero (strong inconsistency). We have reanalyzed about 30 highly influential meta-analyses of clinical trials and found that the mainstream methods produced unsupportable qualitative conclusions in about 10%, leading to scientifically unsupportable public health policies. Further, about another 15% had unsupportable quantitative conclusions. The two references offer an asymptotically valid alternative, based on ratio estimation borrowed from survey sampling methods. We also show that if you adjust for the bias of the IVW estimate, you obtain the unpopular equally weighted estimate.
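
To make the role of random weights concrete, here is a minimal simulation comparing DerSimonian-Laird IVW intervals against the equally weighted mean with a t-based interval. The data-generating values are assumptions, and this is not the paper's ratio-estimation method; the point is only that IVW weights built from estimated variances are themselves random.

```python
"""Toy simulation contrasting IVW and equally weighted meta-analysis.

The effect, heterogeneity, and study sizes are illustrative assumptions.
"""
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)
MU, TAU = 0.4, 0.3        # true overall effect and between-study SD (assumed)
K, REPS = 20, 2000

cover_ivw = cover_eq = 0
for _ in range(REPS):
    sizes = rng.integers(20, 500, size=K)            # per-study sample sizes
    theta = rng.normal(MU, TAU, size=K)              # study-specific true effects
    y, se = np.empty(K), np.empty(K)
    for k in range(K):                               # each study reports mean, SE
        obs = rng.normal(theta[k], 1.0, size=sizes[k])
        y[k] = obs.mean()
        se[k] = obs.std(ddof=1) / np.sqrt(sizes[k])  # estimated, hence random

    # DerSimonian-Laird tau^2 and the IVW random-effects estimate
    w0 = 1 / se**2
    yf = np.sum(w0 * y) / np.sum(w0)
    q = np.sum(w0 * (y - yf) ** 2)
    c = np.sum(w0) - np.sum(w0**2) / np.sum(w0)
    tau2 = max(0.0, (q - (K - 1)) / c)
    w = 1 / (se**2 + tau2)
    m_ivw = np.sum(w * y) / np.sum(w)
    cover_ivw += abs(m_ivw - MU) <= 1.96 * np.sqrt(1 / np.sum(w))

    # equally weighted mean with its own empirical SE and a t interval
    m_eq, se_eq = y.mean(), y.std(ddof=1) / np.sqrt(K)
    cover_eq += abs(m_eq - MU) <= t_dist.ppf(0.975, K - 1) * se_eq

print(f"IVW (DL) 95% CI coverage:   {cover_ivw / REPS:.3f}")
print(f"Equal-weight t CI coverage: {cover_eq / REPS:.3f}")
```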

  • Research Article
  • 10.1080/19466315.2026.2615996
A Randomized Dose-Ranging Basket Trial Design with Enrichment for Dose Optimization
  • Jan 16, 2026
  • Statistics in Biopharmaceutical Research
  • Yeonhee Park + 2 more

Basket trials provide a promising framework for evaluating treatment efficacy across multiple indications, yet many conventional designs fail to account for patient heterogeneity or assume uniform responses across all patients. In this study, we propose a randomized basket trial design with enrichment that addresses these limitations by selectively recruiting treatment-sensitive patients and leveraging a hierarchical Bayesian model (HBM) for dose optimization. Through extensive simulations, we demonstrate that our design outperforms traditional approaches—both fully pooled and independent designs—by improving estimation efficiency without sacrificing indication-specific accuracy. Specifically, enrichment-based strategies (Ind-E, Pool-E, HBM-E) yield more accurate dose selection and higher decision-making power in heterogeneous populations, effectively balancing efficiency and flexibility. The incorporation of HBM allows for adaptive information sharing across indications, further enhancing statistical precision. Overall, our proposed design provides a robust and efficient framework for identifying optimal biologic doses in basket trials, paving the way for more personalized and precise clinical decision-making.
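
As a loose illustration of the information sharing a hierarchical model provides, the sketch below uses an empirical-Bayes beta-binomial model as a stand-in: indication-level response rates are shrunk toward a common prior whose strength is estimated from the data. The counts are invented, and this is not the paper's HBM-E design.

```python
"""Empirical-Bayes beta-binomial shrinkage across basket-trial indications.

A stand-in for a hierarchical Bayesian model: response rates are shrunk
toward a common prior whose strength is estimated from the data. Counts
below are illustrative.
"""
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln
from scipy.stats import beta as beta_dist

x = np.array([8, 3, 12, 5])      # responders per indication (assumed data)
n = np.array([20, 18, 25, 15])   # patients per indication

def neg_marginal_loglik(params):
    """Negative beta-binomial marginal likelihood in log(a), log(b).

    The binomial coefficient is constant in (a, b) and omitted."""
    a, b = np.exp(params)
    return -np.sum(betaln(x + a, n - x + b) - betaln(a, b))

res = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)

post_mean = (x + a_hat) / (n + a_hat + b_hat)   # shrunken response rates
raw = x / n
for k in range(len(x)):
    lo, hi = beta_dist.ppf([0.025, 0.975], x[k] + a_hat, n[k] - x[k] + b_hat)
    print(f"indication {k}: raw {raw[k]:.2f} -> shrunk {post_mean[k]:.2f} "
          f"(95% CrI {lo:.2f}-{hi:.2f})")
```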

  • Research Article
  • 10.1080/19466315.2026.2615999
Inverse Probability of Treatment Weighting: A Simple and Effective Approach to Covariate Adjustment for Survival Endpoints in Randomized Clinical Trials
  • Jan 16, 2026
  • Statistics in Biopharmaceutical Research
  • Yongwu Shao + 2 more

Covariate adjustment aims to improve the statistical efficiency of randomized trials by incorporating information from baseline covariates. Popular methods for covariate adjustment include analysis of covariance for continuous endpoints and standardized logistic regression for binary endpoints. For survival endpoints, while some covariate adjustment methods have been developed, they are not commonly used in practice for various reasons, including high demands for theoretical and methodological sophistication as well as computational skills. In this article, we point out that inverse probability of treatment weighting (IPTW) is a simple and effective approach to covariate adjustment for survival endpoints. We provide theoretical results that justify its use in conjunction with the Cox model and the Kaplan-Meier analysis, as well as numerical results that demonstrate its effectiveness in finite samples of small to moderate sizes. Compared to other covariate adjustment methods for survival endpoints, IPTW has several major advantages including simplicity, generality and ease of implementation, and thus holds great promise as a practical approach to covariate adjustment for survival endpoints.
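
A minimal sketch of the IPTW workflow on simulated data is below: fit a working propensity model for treatment given baseline covariates, form stabilized weights, and fit a weighted Cox model with a robust variance. The covariates and data generation are assumptions; in a randomized trial the propensity model simply absorbs chance covariate imbalance.

```python
"""IPTW-adjusted Cox analysis on simulated trial data.

A minimal sketch: the propensity model, covariates, and data generation
are illustrative, not the paper's numerical studies.
"""
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 600
age = rng.normal(60, 10, n)
biomarker = rng.normal(0, 1, n)
treat = rng.integers(0, 2, n)                       # 1:1 randomization
hazard = 0.02 * np.exp(-0.5 * treat + 0.02 * (age - 60) + 0.3 * biomarker)
time = rng.exponential(1 / hazard)
event = (time < 5).astype(int)                      # administrative censoring at 5y
time = np.minimum(time, 5.0)

df = pd.DataFrame({"time": time, "event": event, "treat": treat,
                   "age": age, "biomarker": biomarker})

# stabilized inverse-probability-of-treatment weights from a working model
ps = LogisticRegression().fit(df[["age", "biomarker"]], df["treat"])
p1 = ps.predict_proba(df[["age", "biomarker"]])[:, 1]
pt = df["treat"].mean()
df["w"] = np.where(df["treat"] == 1, pt / p1, (1 - pt) / (1 - p1))

# weighted Cox model with a robust (sandwich) variance
cph = CoxPHFitter()
cph.fit(df[["time", "event", "treat", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
cph.print_summary()
```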

  • Research Article
  • 10.1080/19466315.2026.2616369
A Bayesian Adaptive Marker-Stratified Design for Phase II Clinical Trials Using Calibrated Spike-and-Slab Priors
  • Jan 16, 2026
  • Statistics in Biopharmaceutical Research
  • Mu Shan + 4 more

The marker-stratified design (MSD) is useful for assessing subgroup-specific treatment effects of molecularly targeted agents (MTA). In MSD, patients are first classified into the marker-positive and marker-negative subgroups, and then randomized to MTA or control treatment within each subgroup. The clinical features of the biomarker and treatments used in MSD offer valuable information for treatment evaluation. Specifically, response rates for patients on the control treatment remain similar across subgroups if the biomarker involved is not prognostic. Additionally, when the MTA is effective, marker-positive patients generally exhibit significantly higher response rates than marker-negative patients receiving the same MTA. This paper proposes a Bayesian adaptive design (SSS) that uses these clinical features to enhance the efficiency of MSD. The SSS design employs spike-and-slab priors to dynamically borrow information on response rates across subgroups. The strength of this information borrowing is automatically determined by two posterior probabilities, which measure the similarities in response rates between subgroups. Furthermore, an extension of the SSS design is proposed to accommodate patients with missing biomarker profiles, utilizing the Bayesian multiple imputation (BMI) method. Simulation studies confirm that the proposed SSS design demonstrates favorable operating characteristics and outperforms conventional Bayesian designs.
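
The sketch below mimics the flavor of this borrowing, assuming beta-binomial subgroup models: the posterior probability that the two control response rates are within a margin is used as the weight on a pooled ("spike") versus separate ("slab") posterior. The counts, margin, and mixing rule are simplifications, not the SSS design itself.

```python
"""Posterior-similarity-driven borrowing between marker subgroups.

A simplified stand-in for spike-and-slab borrowing: the pooled-vs-separate
mix is weighted by the posterior probability that the two control response
rates are close. Counts and the similarity margin are illustrative.
"""
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
MARGIN = 0.10                      # similarity margin (assumption)
x_pos, n_pos = 9, 30               # control responders, marker-positive
x_neg, n_neg = 11, 32              # control responders, marker-negative

# independent Beta(1,1) posteriors for each subgroup
draws_pos = beta.rvs(1 + x_pos, 1 + n_pos - x_pos, size=100_000, random_state=rng)
draws_neg = beta.rvs(1 + x_neg, 1 + n_neg - x_neg, size=100_000, random_state=rng)
p_similar = np.mean(np.abs(draws_pos - draws_neg) < MARGIN)

# pooled posterior (the 'spike': full borrowing across subgroups)
pooled = beta.rvs(1 + x_pos + x_neg, 1 + n_pos + n_neg - x_pos - x_neg,
                  size=100_000, random_state=rng)

# mixture posterior for the marker-positive control rate
use_pooled = rng.random(100_000) < p_similar
mixture = np.where(use_pooled, pooled, draws_pos)

print(f"P(rates within {MARGIN}) = {p_similar:.2f}")
print(f"marker-positive control rate: no borrowing {draws_pos.mean():.3f}, "
      f"with similarity-weighted borrowing {mixture.mean():.3f}")
```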

  • Open Access
  • Research Article
  • 10.1080/19466315.2026.2615998
Statistical Method for Threshold Value Determination of Diagnostic Biomarker in Noisy Data
  • Jan 14, 2026
  • Statistics in Biopharmaceutical Research
  • Yichen Jia + 5 more

Establishment of diagnostic biomarkers of previous disease exposure is essential in precision medicine. One of the challenges in this application is the certainty of cases being either positive or negative in the training and test analysis sets used to establish a reliable cutoff. Practical situations such as asymptomatic cases, non-reported records, absence of doctors’ visits, missing biomarker samples, or lack of assay sensitivity at the time of testing may result in subjects being misclassified as negative cases. The uncertain response labels subsequently lead to a biased cutoff value determination, since the main assumption of supervised classification methods is that the labels of training and test samples are true up to random errors. This paper provides statistical solutions to the unknown potential labeling issue, focusing on two practical aspects: (1) statistical visualization methods to explore sample responses and thereby identify potentially mislabeled cases; (2) application of a semi-supervised learning method, Robust Mixture Discriminant Analysis (RMDA), to responses with uncertain labels to determine the cutoff value. These topics are illustrated using a real pediatric serum IgA biomarker dataset for identification of previous RSV exposure status.
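
As a simplified stand-in for RMDA, the sketch below fits a two-component Gaussian mixture to simulated log-titers without trusting the labels, takes the cutoff where the posterior probability of the higher-mean component crosses 0.5, and flags labeled negatives above that cutoff as potentially mislabeled. The data and the 0.5 rule are illustrative assumptions.

```python
"""Cutoff determination from noisy labels via a two-component mixture.

A stand-in for RMDA: the mixture is fit without trusting the labels, and
the cutoff is the biomarker value where the posterior probability of the
'exposed' component crosses 0.5. Data below are simulated log-titers.
"""
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
# simulated log IgA titers: unexposed ~ N(0,1), exposed ~ N(2.5,1)
neg = rng.normal(0.0, 1.0, 300)
pos = rng.normal(2.5, 1.0, 200)
x = np.concatenate([neg, pos]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
hi = np.argmax(gm.means_.ravel())                # index of 'exposed' component

# scan a fine grid for the point where the posterior crosses 0.5
grid = np.linspace(x.min(), x.max(), 2001).reshape(-1, 1)
post_hi = gm.predict_proba(grid)[:, hi]
cutoff = grid[np.argmin(np.abs(post_hi - 0.5))][0]
print(f"estimated cutoff (posterior crosses 0.5): {cutoff:.2f}")

# flag potentially mislabeled negatives: labeled negative but above cutoff
flagged = np.mean(neg > cutoff)
print(f"fraction of 'negative' labels above cutoff: {flagged:.2%}")
```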

  • Discussion
  • 10.1080/19466315.2025.2606333
Considerations on Interim Evaluation of OS in Pivotal Oncology Trials
  • Dec 23, 2025
  • Statistics in Biopharmaceutical Research
  • Weidong Zhang + 8 more

Overall survival (OS) is considered a gold-standard clinical endpoint for evaluating the effectiveness of a drug. In oncology studies, OS is relatively simple and straightforward to measure in a clinical trial. Recent advancements in medical science and cancer care have significantly prolonged the lifespan of cancer patients. As a result, it is becoming challenging to measure OS due to its long course in some cancers. In addition, it may be challenging to interpret OS in some circumstances; for example, when patients switch from the control arm to the experimental arm to receive a novel therapy, an uncertain detrimental effect might be observed in OS in the experimental arm, or a discrepancy between OS and other clinical endpoints can be observed. This manuscript provides an overview of the challenges of using OS as a clinical endpoint in pivotal oncology trials. Discussions focus on using OS as both a safety and efficacy endpoint for decision-making.

  • Research Article
  • 10.1080/19466315.2025.2579549
Evaluation of Current Statistical Methods for Implementing Quality Tolerance Limits
  • Dec 23, 2025
  • Statistics in Biopharmaceutical Research
  • Rakhi Kilaru + 9 more

The recently released third draft version of ICH E6(R3) places great emphasis on Risk-Based Quality Management (RBQM) principles and includes the concept of Quality Tolerance Limits (QTLs), which are regarded as an example of predefined acceptable ranges that, if exceeded, might affect participant safety or the reliability of trial results. This change allows for greater flexibility and adaptability in managing quality and risks in clinical trials, leading to more effective and efficient trials. In this paper, we conduct simulations to evaluate statistical methods, including statistical process control and Bayesian methods, for implementing QTLs in clinical trials. We evaluate operating characteristics such as average run length, alarm rate, false alarm rate, and other performance metrics. Generally, all methods performed better with larger sample sizes and higher expected probabilities. There was greater variability in performance across methods early in the review cycle, when sample sizes were small. Statistical process control methods performed better in most scenarios, while Bayesian methods were more effective at detecting an out-of-control process earlier for lower expected probabilities. Not all scenarios could be investigated; thus, method selection depends on factors such as assumptions, statistical complexity, and feasibility.
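
For intuition, the sketch below implements one toy rule from each family compared in the paper: a p-chart style 3-sigma limit around the QTL and a Bayesian posterior exceedance probability under a flat prior. The QTL, interim counts, and signaling thresholds are illustrative assumptions, not the paper's settings.

```python
"""Two simple QTL monitoring rules for an event proportion.

Illustrative stand-ins for the method families compared in the paper.
"""
from scipy.stats import beta

QTL = 0.10          # quality tolerance limit on the event proportion (assumed)
x, n = 9, 60        # observed events at an interim review (illustrative)

# SPC-style rule: signal if the observed rate exceeds a 3-sigma p-chart limit
p_hat = x / n
limit = QTL + 3 * (QTL * (1 - QTL) / n) ** 0.5
spc_signal = p_hat > limit

# Bayesian rule: signal if P(rate > QTL | data) is high under a Beta(1,1) prior
p_exceed = beta.sf(QTL, 1 + x, 1 + n - x)
bayes_signal = p_exceed > 0.80

print(f"observed rate {p_hat:.3f}, p-chart limit {limit:.3f}, "
      f"SPC signal: {spc_signal}")
print(f"P(rate > QTL | data) = {p_exceed:.3f}, Bayes signal: {bayes_signal}")
```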

  • Open Access Icon
  • Research Article
  • 10.1080/19466315.2025.2573322
Comparison of g-Estimation Approaches for Handling Symptomatic Medication at Multiple Timepoints in Alzheimer’s Disease with a Hypothetical Strategy
  • Dec 23, 2025
  • Statistics in Biopharmaceutical Research
  • Florian Lasch + 2 more

For handling intercurrent events in clinical trials, one of the strategies outlined in the ICH E9(R1) addendum targets a hypothetical scenario in which an intercurrent event would not occur. While this strategy is often implemented by setting data after the intercurrent event to “missing” even if they have been collected, g-estimation allows for more efficient estimation by using the information contained in post-intercurrent-event data. As g-estimation methods have largely been developed outside of randomized clinical trials, optimizations for their application in clinical trials are possible. In this article, we describe and investigate the performance of modifications to the established g-estimation methods, leveraging the assumption that some intercurrent events are expected to have the same impact on the outcome regardless of the timing of their occurrence. In a simulation study in Alzheimer’s disease, the modifications show a substantial efficiency advantage for the estimation of an estimand that applies the hypothetical strategy to the use of symptomatic treatment, while retaining approximate unbiasedness and adequate Type I error control.
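
A single-timepoint toy version of the g-estimation idea is sketched below: assuming the symptomatic medication shifts the outcome by a constant psi whenever taken (the timing-invariance assumption the paper leverages), psi solves an estimating equation that makes the "medication-free" outcome H(psi) = Y - psi*A mean-independent of medication use given baseline severity. The data generation and known propensity are simplifying assumptions, not the paper's multi-timepoint methods.

```python
"""Toy single-timepoint g-estimation of a constant medication effect.

Simulated data; the medication propensity is treated as known to keep
the sketch short (it would normally be estimated, e.g., by logistic
regression).
"""
import numpy as np

rng = np.random.default_rng(5)
n = 2000
severity = rng.normal(0, 1, n)                    # baseline confounder
e = 1 / (1 + np.exp(-0.8 * severity))             # P(medication | severity)
a = rng.binomial(1, e)                            # symptomatic medication use
PSI_TRUE = 1.5
y = 2.0 * severity + PSI_TRUE * a + rng.normal(0, 1, n)

# The estimating equation E[(A - e(L)) * (Y - psi * A)] = 0 holds at the
# true psi under no unmeasured confounding; it is linear in psi, so it
# solves in closed form.
psi_hat = np.mean((a - e) * y) / np.mean((a - e) * a)
naive = y[a == 1].mean() - y[a == 0].mean()       # confounded contrast
print(f"g-estimate of psi: {psi_hat:.3f} (true {PSI_TRUE})")
print(f"naive medicated-vs-not contrast: {naive:.3f} (confounded)")
```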

  • Research Article
  • 10.1080/19466315.2025.2579553
Pre-Specified Safety Analysis of OS Data for Trials in Indolent or Early-Stage Cancers
  • Dec 21, 2025
  • Statistics in Biopharmaceutical Research
  • Minghua Shan

Confirmatory cancer clinical trials in chronic or indolent diseases often use imaging endpoints as the primary measure of efficacy due to relatively long survival times. Progression-free survival (PFS) is often such an imaging-based endpoint. A substantial increase in PFS in the absence of severe toxicities may be considered a meaningful clinical benefit. In these trials, overall survival (OS) is often a secondary or exploratory endpoint with low or unknown statistical power. Because OS is relatively long, few OS events have occurred at the time of a trial’s primary completion (e.g., the primary PFS analysis). Additionally, unlike the primary endpoint, analyses of OS are often not well planned and described in the protocol. All of this makes it challenging to interpret OS results in order to determine OS benefit or detriment. However, OS is an ultimate measure of safety as well as efficacy. We present two methods for planning analyses of OS for safety evaluation: a three-outcome procedure and a two-outcome procedure. They can be used to plan OS safety analyses so that sufficient data are available to provide at least the minimum level of information required to rule out a substantial detriment. They also provide guidelines for interpreting OS results.
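
The planning question behind such procedures can be illustrated with the Schoenfeld approximation, under which the log hazard-ratio estimate with d OS events and 1:1 randomization is roughly N(log HR, 4/d). The sketch below asks how many events are needed to rule out a detriment margin with given power; the margins, alpha, and power are illustrative, not the paper's recommendations.

```python
"""Events needed to rule out an OS detriment margin (Schoenfeld approximation).

With d events and 1:1 randomization, log(HR-hat) ~ N(log HR, 4/d). The
margins, alpha, and power below are illustrative assumptions.
"""
import math
from scipy.stats import norm

def events_to_rule_out(margin_hr, true_hr=1.0, alpha=0.025, power=0.8):
    """OS events so P(upper CI bound < margin) >= power when HR = true_hr."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    return math.ceil(4 * ((za + zb) /
                          (math.log(margin_hr) - math.log(true_hr))) ** 2)

for margin in (1.3, 1.5, 2.0):
    print(f"rule out HR >= {margin}: ~{events_to_rule_out(margin)} OS events")
```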