Causal Duration Analysis Based on Survival Probability Ratio
For causal effects of a binary treatment on a right-censored duration, the widely used proportional-hazard contrasts are non-causal and rest on unrealistic restrictions. This article proposes a flexible causal alternative in which we estimate the cumulative hazard, not the hazard itself, using an additive or "exponential-additive" specification with freely time-varying parameters. The proportional-hazard model is included as a special case that permits only monotonic survival probability ratios (SPRs), whereas our approach allows SPRs of any shape. An empirical analysis of recidivism, using the duration until re-arrest after prison release on parole/probation, shows an SPR trajectory that is not monotonic but has an inverted-U shape over time.
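A minimal sketch of how an SPR trajectory could be inspected in practice, assuming a simple two-arm dataset with columns `time`, `event`, and `treated` (all names hypothetical). It uses plain Kaplan-Meier curves rather than the paper's additive cumulative-hazard specification, so it only illustrates the SPR quantity itself, not the proposed estimator.

```python
# Sketch: survival probability ratio SPR(t) = S1(t) / S0(t) from
# Kaplan-Meier curves of two treatment arms. Column names are
# hypothetical; this is not the paper's additive-hazard estimator.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

def spr_trajectory(df, time_col="time", event_col="event", arm_col="treated"):
    km1, km0 = KaplanMeierFitter(), KaplanMeierFitter()
    treated = df[df[arm_col] == 1]
    control = df[df[arm_col] == 0]
    km1.fit(treated[time_col], treated[event_col], label="treated")
    km0.fit(control[time_col], control[event_col], label="control")

    # Evaluate both survival curves on a common time grid.
    grid = np.linspace(0, df[time_col].max(), 100)
    s1 = km1.survival_function_at_times(grid).values
    s0 = km0.survival_function_at_times(grid).values
    return pd.DataFrame({"t": grid, "SPR": s1 / s0})

# A non-monotonic (e.g. inverted-U) SPR trajectory is exactly what a
# proportional-hazards contrast cannot represent.
```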
- Research Article
15
- 10.1177/1740774513479522
- Apr 3, 2013
- Clinical Trials
When an outcome of interest in a clinical trial is late-occurring or difficult to obtain, surrogate markers can extract information about the effect of the treatment on the outcome of interest. Understanding associations between the causal effect (CE) of treatment on the outcome and the causal effect of treatment on the surrogate is critical to understanding the value of a surrogate from a clinical perspective. Traditional regression approaches to determining the proportion of the treatment effect explained by surrogate markers suffer from several shortcomings: they can be unstable and can lie outside the 0-1 range. Furthermore, they do not account for the fact that surrogate measures are obtained post randomization, so the surrogate-outcome relationship may be subject to unmeasured confounding. Methods that avoid these problems are therefore of key importance. Frangakis and Rubin suggested assessing the CE within prerandomization 'principal strata' defined by the counterfactual joint distribution of the surrogate marker under the different treatment arms, with the proportion of the overall outcome CE attributable to subjects for whom the treatment affects the proposed surrogate as the key measure of interest. Li et al. developed this 'principal surrogacy' approach for dichotomous markers and outcomes, using Bayesian methods that accommodate nonidentifiability in the model parameters. Because the surrogate marker is typically observed early, outcome data are often missing. Here, we extend Li et al. to accommodate missing data in the observable final outcome under ignorable and nonignorable settings. We also allow for the possibility that missingness has a counterfactual component, a feature that previous literature has not addressed. We apply the proposed methods to a trial of glaucoma control comparing surgery versus medication, where intraocular pressure (IOP) control at 12 months is a surrogate for IOP control at 96 months. We also conduct a series of simulations to consider the impact of nonignorability, as well as sensitivity to priors and the ability of the deviance information criterion (DIC) to choose the correct model when parameters are not fully identified. Because the model parameters cannot be fully identified from data, informative priors can introduce nontrivial bias in moderate-sample-size settings, while more noninformative priors can yield wide credible intervals. Assessing the linkage between CEs of treatment on a surrogate marker and CEs of treatment on an outcome is important to understanding the value of a marker. These CEs are not fully identifiable; hence, we explore the sensitivity and identifiability aspects of these models and show that relatively weak assumptions can still yield meaningful results.
- Dissertation
- 10.17037/pubs.04657555
- Oct 25, 2020
Randomised trials are viewed as the gold standard for evaluating interventions. Depending on the intervention and other logistical factors, either individuals or groups of individuals may be randomised. The former are known as individually randomised controlled trials (RCTs) and the latter as cluster randomised trials (CRTs). CRTs offer advantages such as administrative convenience and reduced contamination between trial groups, but their analysis is more complex than that of RCTs because of the correlations between participants in the same cluster. When non-adherence to treatment occurs, in the sense that some participants do not receive the randomly assigned treatment, confounding may exist because common factors can influence both treatment received and outcome. Consequently, the intention-to-treat approach, which compares outcomes between the groups as randomised, assesses the effect of being randomised to treatment rather than the causal treatment effect (the effect of receiving the treatment). Ad-hoc methods often used to estimate the causal effect of treatment received, such as per-protocol (PP) and as-treated (AT) approaches, are likely to provide biased estimates because the assumptions necessary for them to be unbiased are in general implausible. There is extensive literature on estimating causal treatment effects from RCTs with non-adherence, but much less for CRTs. Instrumental variables (IV) methods have the advantage, over other causal methods, of accommodating unmeasured confounders when making causal inference. This thesis contributes to the literature on estimating causal treatment effects in CRTs with non-adherence to treatment and focuses on IV-based methods. I first ascertained the current practice of reporting and addressing non-adherence when causal treatment effects are of interest in CRTs via a systematic review of 123 CRT reports. Non-adherence was reported in about half of the CRTs, of which a third were interested in the causal treatment effect. All of the reviewed CRTs that reported adherence-adjusted estimates performed either PP or AT analyses, without discussing the plausibility of the very strong assumptions necessary for such analyses to yield unbiased causal treatment estimates. No study estimated the local average treatment effect (LATE), that is, the average treatment effect among those who would comply with the randomly allocated treatment, or used any other appropriate statistical method for unbiased causal estimation. In many clinical settings, the relevant causal question is whether treatment has an effect among those who are willing to take it, which is quantified by the LATE. Hence the thesis focuses on this estimand, starting with an introduction to and assessment of the performance of IV-based methods for estimating the LATE at either cluster level (CL) or individual level (IL) through simulations under the required identification assumptions. I also perform sensitivity analyses for IL-LATE estimation and illustrate the methods using two real CRTs. The methods include two-stage least squares (TSLS) based on CL outcome summaries and the Wald estimator with the Schochet-Chiang standard error for estimating the CL-LATE, and the Wald estimator, TSLS with cluster-robust standard errors, TSLS with Moulton's standard errors and Bayesian multilevel mixture modelling for estimating the IL-LATE. I conduct extensive simulations and illustrate the methods using data from real CRTs.
I demonstrate that TSLS is attractive for estimating the CL-LATE and IL-LATE but inefficient; this inefficiency may be reduced through covariate adjustment. Bayesian multilevel mixture modelling is also attractive due to its flexibility and performs well, particularly when non-adherence is at the individual level and the intracluster correlation coefficient of the outcome is large. Stata and R code is provided to facilitate implementation by trial investigators. I conclude with recommendations on how to estimate the CL-LATE and IL-LATE, to improve the quality of analysis when estimating causal treatment effects in the presence of non-adherence in CRTs.
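As a rough illustration of the estimand discussed above, the sketch below computes a cluster-level Wald estimator of the LATE as the ratio of the ITT effect on the outcome to the ITT effect on treatment received, using cluster means. The data-frame columns (`cluster`, `z`, `d`, `y`) are hypothetical, and the thesis's actual Stata/R code and standard-error corrections are not reproduced here.

```python
# Sketch: cluster-level Wald estimator of the LATE in a CRT with
# non-adherence. z = randomised arm, d = treatment received, y = outcome.
# Column names are hypothetical; this ignores the Schochet-Chiang and
# Moulton standard-error corrections discussed in the thesis.
import pandas as pd

def cluster_level_late(df, cluster="cluster", z="z", d="d", y="y"):
    # Collapse to one row per cluster (CRTs randomise whole clusters,
    # so z is constant within a cluster).
    cl = df.groupby(cluster).agg(z=(z, "first"), d=(d, "mean"), y=(y, "mean"))

    itt_y = cl.loc[cl["z"] == 1, "y"].mean() - cl.loc[cl["z"] == 0, "y"].mean()
    itt_d = cl.loc[cl["z"] == 1, "d"].mean() - cl.loc[cl["z"] == 0, "d"].mean()
    return itt_y / itt_d  # Wald / IV estimate of the LATE
```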
- Research Article
36
- 10.1016/j.jphys.2016.11.007
- Dec 1, 2016
- Journal of Physiotherapy
A multifactorial intervention for frail older people is more than twice as effective among those who are compliant: complier average causal effect analysis of a randomised trial.
- Research Article
1
- 10.1108/jbim-09-2023-0544
- Apr 29, 2025
- Journal of Business & Industrial Marketing
Purpose: This study aims to examine the role of big data marketing capability (BDMC) in shaping firms’ innovation behavior within the context of digital innovation. By defining BDMC and identifying its core dimensions, the study provides a framework for understanding how BDMC moderates the inverted U-shaped relationship between two key types of innovation, explorative and exploitative innovation, and their impact on innovation performance. Design/methodology/approach: BDMC is conceptualized through five key dimensions: (1) big data-driven specialized marketing capability, (2) big data-driven customer relationship management (CRM) capability, (3) big data-driven channel and alliance management capability, (4) big data-driven brand management capability and (5) big data-driven market information and knowledge capability. A refined measurement scale for BDMC is developed based on these dimensions. Using hierarchical regression analysis and U-shaped tests, this study investigates how BDMC moderates the nonlinear (inverted U-shaped) relationship between explorative and exploitative innovation and innovation performance. Empirical analysis is conducted using data from 151 firms in the Chinese automotive manufacturing industry. Findings: The results confirm the distinct effects of explorative and exploitative innovation on innovation performance, with these relationships significantly moderated by BDMC. Under experience-driven marketing capability, explorative innovation exhibits a positive linear effect on performance, while exploitative innovation follows an inverted U-shaped pattern. However, with BDMC, the relationship between explorative innovation and performance shifts to an inverted U-shape, while exploitative innovation transitions from an inverted U-shape to a U-shape, highlighting BDMC’s moderating role. Originality/value: This study advances the literature by clearly defining BDMC, refining its measurement scale and assessing its moderating influence on innovation strategies. It contributes to the behavioral theory of the firm, the capability-based view and digital innovation theory by positioning BDMC as a pivotal capability that shapes firms’ ability to balance explorative and exploitative innovation. The study provides practical insights for firms undergoing digital transformation, offering a strategic framework for leveraging BDMC to enhance innovation performance.
- Research Article
- 10.1002/sim.70319
- Nov 1, 2025
- Statistics in medicine
The high placebo responses observed in many placebo-controlled randomized clinical trials, particularly in psychiatric research, have hindered the demonstration of treatment efficacy. To address this issue, Fava proposed the Sequential Parallel Comparison Design (SPCD) in 2003, which aims to mitigate the placebo response by estimating a pooled treatment effect. This is achieved by combining the treatment effect observed in the first stage among intention-to-treat (ITT) subjects with the second-stage treatment effect among placebo non-responders through a weighted average. However, the challenges in interpreting this pooled treatment effect causally complicate the review of SPCD-designed studies. This paper explores the pooled SPCD treatment effect and contrasts it with two causal estimands: the causal average treatment effect among non-responders, and the causal average treatment effect had all ITT subjects exhibited low placebo responses during the study. These estimands reflect two opposing views of the placebo response, either as an immutable personal trait or as a manipulable feature. Through carefully designed simulation studies, we demonstrate the direction and magnitude of the bias incurred when the pooled effect is interpreted as either causal estimand. In these simulations, the pooled effect tends to underestimate the treatment benefit relative to the two causal estimands in most scenarios. Furthermore, the causal estimand developed to overcome the interpretational limitations of the pooled effect exhibits statistically superior performance in terms of bias and MSE when estimated with the G-formula approach. As such, we recommend its adoption where applicable. The first completed trial using the SPCD design, ADAPT-A, is reanalyzed to further confirm these findings.
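For readers unfamiliar with the design, the pooled SPCD estimand described above takes the general form of a weighted average of the two stage-specific contrasts; the notation below is illustrative rather than the paper's own, and the weight is a design choice.

```latex
% Illustrative notation, not the paper's own symbols.
% \Delta_1: stage-1 treatment-vs-placebo contrast among all ITT subjects.
% \Delta_2: stage-2 contrast among stage-1 placebo non-responders.
\tau_{\mathrm{SPCD}} \;=\; w\,\Delta_1 \;+\; (1-w)\,\Delta_2,
\qquad 0 \le w \le 1 .
```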
- Abstract
- 10.1186/1745-6215-14-s1-o37
- Nov 1, 2013
- Trials
Treatment changes are typical in epilepsy because of the chronic nature of the disease and common problems with treatment, most typically inadequate seizure control and unacceptable adverse effects. Randomised trials in epilepsy often adopt a pragmatic approach to treatment changes, in order to cater for patient needs and mirror patients' real-life experience. Trial patients may experience multiple treatment changes taking a variety of forms, including addition of alternative treatment(s), switching to other treatment(s) and complete withdrawal from all treatment. Primary analysis of trial data is usually based on the principle of intention to treat (ITT), which ignores such treatment changes and thus avoids any selection bias that may be introduced by changes from randomised treatment. ITT, however, only allows estimation of the effectiveness of treatment, rather than the true efficacy (or causal effect) of treatment, which is of particular interest to patients and clinicians alike in this setting. Methods such as per-protocol or as-treated analyses are commonly used to estimate causal effects, but are often biased because treatment changes are typically associated with prognostic factors. Alternative statistical methods exist to estimate causal treatment effects whilst avoiding such bias, such as the rank-preserving structural failure time model (RPSFTM) and inverse probability of censoring weighting (IPCW) models, but these methods are not well known and require certain assumptions. I will discuss the challenges of adjusting for treatment changes in epilepsy and demonstrate the RPSFTM and IPCW methods, discussing their advantages and disadvantages in this setting.
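The following is a deliberately simplified sketch of the IPCW idea mentioned above: patients are artificially censored at treatment change, a model for remaining "unswitched" given covariates supplies weights, and a weighted analysis stands in for the population that would have been observed without treatment changes. The variable names and single-period structure are illustrative assumptions; real IPCW analyses use time-varying weights.

```python
# Simplified IPCW sketch: artificially censor at treatment change and
# re-weight the remaining patients by 1 / P(no change | covariates).
# Column names are hypothetical; real analyses use time-varying weights.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ipcw_cox(df, covariates, switched="switched", time="time",
             event="event", arm="arm"):
    # Model the probability of *not* switching given baseline covariates.
    stay = (df[switched] == 0).astype(int)
    model = LogisticRegression(max_iter=1000).fit(df[covariates], stay)
    p_stay = model.predict_proba(df[covariates])[:, 1]

    # Keep only patients who did not switch, weighted by 1 / p_stay.
    kept = df[df[switched] == 0].copy()
    kept["ipcw"] = 1.0 / p_stay[df[switched].values == 0]

    cph = CoxPHFitter()
    cph.fit(kept[[time, event, arm, "ipcw"]], duration_col=time,
            event_col=event, weights_col="ipcw", robust=True)
    return cph
```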
- Single Report
2
- 10.1920/wp.cem.2019.6419
- Nov 29, 2019
This paper presents a weighted optimization framework that unifies binary, multivalued, and continuous treatments, as well as mixtures of discrete and continuous treatments, under an unconfounded treatment assignment. With a general loss function, the framework includes the average, quantile, and asymmetric-least-squares causal effects of treatment as special cases. For this general framework, we first derive the semiparametric efficiency bound for the causal effect of treatment, extending existing bound results to a wider class of models. We then propose a generalized optimization estimator for the causal effect, with weights estimated by solving an expanding set of equations. Under some sufficient conditions, we establish the consistency and asymptotic normality of the proposed estimator and show that it attains the semiparametric efficiency bound, thereby extending the existing literature on efficient estimation of causal effects to a wider class of applications. Finally, we discuss estimation of causal-effect functionals such as the treatment-effect curve and the average outcome. To evaluate the finite-sample performance of the proposed procedure, we conduct a small-scale simulation study and find that the proposed estimator has practical value. In an empirical application, we detect a significant causal effect of political advertisements on campaign contributions in the binary treatment model, but not in the continuous treatment model. Keywords: causal effect; entropy maximization; treatment effect; semiparametric efficiency; sieve method; stabilized weights. JEL codes: C14, C21.
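A minimal sketch of the stabilized-weight idea in the simplest binary-treatment case, using a plain logistic propensity score rather than the paper's sieve-based weight estimation and general loss functions; all names are hypothetical.

```python
# Sketch: stabilized inverse-probability weights for a binary treatment
# and the weighted-mean ATE estimator. The paper's sieve estimation and
# general loss functions are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ipw_ate(X, d, y):
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    p_treat = d.mean()  # marginal treatment probability stabilizes the weights
    w = np.where(d == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

    mu1 = np.average(y[d == 1], weights=w[d == 1])
    mu0 = np.average(y[d == 0], weights=w[d == 0])
    return mu1 - mu0
```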
- Research Article
52
- 10.1002/pst.365
- Feb 6, 2009
- Pharmaceutical Statistics
In survival analysis, treatment effects are commonly evaluated through survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one approach used to adjust for confounding between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare group differences between IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. The hazard ratio, as a causal treatment effect, can also be estimated using the IPTW approach. If the treatment groups correspond to ordered levels of a treatment, the proposed method can easily be extended to the analysis of treatment-effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results suggested that pravastatin treatment reduces the risk of CHD and that compliance with pravastatin treatment is important for the prevention of CHD.
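A minimal sketch of the weighting step for a multi-valued treatment, assuming `X`, `treatment`, `time`, and `event` arrays (hypothetical names): a multinomial logistic model supplies the generalized propensity score and each arm's Kaplan-Meier curve is re-weighted by its inverse. This illustrates the weighting idea only, not the exact IPTW log-rank test derived in the paper.

```python
# Sketch: IPTW-adjusted Kaplan-Meier curves for a multi-valued treatment,
# with weights from a generalized propensity score (multinomial logistic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

def iptw_km_curves(X, treatment, time, event):
    gps_model = LogisticRegression(max_iter=1000).fit(X, treatment)
    gps = gps_model.predict_proba(X)  # P(T = k | X) for every level k

    curves = {}
    for j, level in enumerate(gps_model.classes_):
        mask = treatment == level
        w = 1.0 / gps[mask, j]        # inverse generalized propensity score
        km = KaplanMeierFitter()
        km.fit(time[mask], event[mask], weights=w, label=f"arm {level}")
        curves[level] = km
    return curves
```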
- Research Article
334
- 10.1037/xge0000920
- Apr 1, 2021
- Journal of Experimental Psychology: General
When the outcome is binary, psychologists often use nonlinear modeling strategies such as logit or probit. These strategies are often neither optimal nor justified when the objective is to estimate causal effects of experimental treatments. Researchers need to take extra steps to convert logit and probit coefficients into interpretable quantities, and when they do, these quantities often remain difficult to understand. Odds ratios, for instance, are described as obscure in many textbooks (e.g., Gelman & Hill, 2006, p. 83). I draw on econometric theory and established statistical findings to demonstrate that linear regression is generally the best strategy to estimate causal effects of treatments on binary outcomes. Linear regression coefficients are directly interpretable in terms of probabilities and, when interaction terms or fixed effects are included, linear regression is safer. I review the Neyman-Rubin causal model, which I use to prove analytically that linear regression yields unbiased estimates of treatment effects on binary outcomes. Then, I run simulations and analyze existing data on 24,191 students from 56 middle schools (Paluck, Shepherd, & Aronow, 2013) to illustrate the effectiveness of linear regression. Based on these grounds, I recommend that psychologists use linear regression to estimate treatment effects on binary outcomes.
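A small sketch of the core point, assuming a randomized binary treatment and made-up outcome probabilities: the difference in means (equivalently, OLS on a treatment dummy) lands directly on the probability scale, whereas a logit coefficient would still need transformation.

```python
# Sketch: in a randomized experiment with a binary outcome, the
# difference in means recovers the average treatment effect directly
# on the probability scale. Numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
treat = rng.integers(0, 2, n)
p = np.where(treat == 1, 0.55, 0.40)   # true potential-outcome probabilities
y = rng.binomial(1, p)

ols_ate = y[treat == 1].mean() - y[treat == 0].mean()
print(f"true ATE = 0.15, difference-in-means estimate = {ols_ate:.3f}")

# A logit fit of y on treat is consistent too, but its coefficient is a
# log-odds ratio and needs a further transformation to reach this scale.
```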
- Book Chapter
1
- 10.1002/9781118445112.stat03692
- Sep 29, 2014
Estimation of the effects of treatments in a randomized clinical trial is usually based on the intention‐to‐treat (ITT) principle. This procedure leads to an estimate of the average causal effect of treatment allocation based on everyone in the trial. When there are departures from randomized allocation (noncompliance), the ITT effect does not measure the impact of the receipt of treatment. The ITT estimate is an attenuated estimate of the effect of the receipt of treatment. We obtain a valid estimate of the latter by estimating the average effect of random allocation (i.e., ITT effect) on outcomes in the subgroup of participants who receive treatment if and only if they are randomized to its receipt (i.e., the compliers)—this is known as the complier‐average causal effect (CACE) of treatment. Noncompliance is often associated with loss to follow‐up. We illustrate simple methods to calculate CACE estimates both with and without the complication of missing outcome data.
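One simple way the CACE is often calculated, shown below as a sketch with made-up summary numbers and assuming one-sided noncompliance (no control participant can access the treatment); this is the standard instrumental-variable rescaling, not necessarily the exact procedure of the chapter.

```python
# Sketch: complier-average causal effect (CACE) as the ITT effect
# divided by the proportion of compliers, under one-sided noncompliance.
def cace(itt_effect, compliance_rate):
    # The ITT effect is attenuated by noncompliance; rescaling by the
    # compliance rate recovers the effect among compliers.
    return itt_effect / compliance_rate

# Example with made-up numbers: ITT difference of 2 points, 60% of the
# intervention arm actually received the treatment.
print(cace(2.0, 0.6))  # -> about 3.33 points among compliers
```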
- Book Chapter
- 10.1002/9780470061596.risk0612
- Jul 15, 2008
Estimation of the effects of treatments in a randomized clinical trial is usually based on the intention‐to‐treat (ITT) principle. This procedure leads to an estimate of the average causal effect of treatment allocation based on everyone in the trial. When there are departures from randomized allocation (noncompliance), the ITT effect does not measure the impact of the receipt of treatment. The ITT estimate is an attenuated estimate of the effect of the receipt of treatment. We obtain a valid estimate of the latter by estimating the average effect of random allocation (i.e., ITT effect) on outcomes in the subgroup of participants who receive treatment if and only if they are randomized to its receipt (i.e., the compliers)—this is known as the complier‐average causal effect (CACE) of treatment. Noncompliance is often associated with loss to follow‐up. We illustrate simple methods to calculate CACE estimates both with and without the complication of missing outcome data.
- Research Article
2
- 10.1002/pst.2141
- May 21, 2021
- Pharmaceutical Statistics
In the meta-analytic surrogate evaluation framework, the trial-level coefficient of determination quantifies the strength of the association between the expected causal treatment effects on the surrogate (S) and the true (T) endpoints. Burzykowski and Buyse supplemented this metric of surrogacy with the surrogate threshold effect (STE), defined as the minimum value of the causal treatment effect on S for which the predicted causal treatment effect on T exceeds zero. The STE thus supplements the trial-level coefficient of determination with a more direct, clinically interpretable metric of surrogacy. Alonso et al. proposed to evaluate surrogacy based on the strength of the association between the individual (rather than expected) causal treatment effects on S and T. In the current paper, the individual-level surrogate threshold effect (ISTE) is introduced in the setting where S and T are normally distributed variables. The ISTE is defined as the minimum value of the individual causal treatment effect on S for which the lower limit of the prediction interval around the individual causal treatment effect on T exceeds zero. The newly proposed methodology is applied in a case study, where it is illustrated that the ISTE has an appealing clinical interpretation. The R package surrogate implements the methodology, and a web appendix (supporting information) details how the analyses can be conducted in practice.
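A toy illustration of the ISTE definition only: find the smallest individual causal effect on S at which the lower prediction limit for the effect on T exceeds zero. The linear prediction with a constant error standard deviation is an illustrative assumption, not the paper's actual model, and all numbers are made up.

```python
# Toy sketch of the ISTE definition: smallest delta_S at which the lower
# limit of the prediction interval for delta_T exceeds zero, assuming an
# illustrative linear prediction with constant error SD.
import numpy as np
from scipy.stats import norm

def iste(intercept, slope, pred_sd, alpha=0.05, grid=None):
    if grid is None:
        grid = np.linspace(-5, 5, 10_001)
    lower = intercept + slope * grid - norm.ppf(1 - alpha / 2) * pred_sd
    above = grid[lower > 0]
    return above.min() if above.size else np.nan

# Example: delta_T predicted as 0.2 + 0.8 * delta_S with prediction SD 0.5.
print(iste(0.2, 0.8, 0.5))
```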
- Research Article
5
- 10.1214/18-aos1795
- Jan 1, 2019
- The Annals of Statistics
In sequential causal inference, two types of causal effects are of practical interest, namely, the causal effect of the treatment regime (called the sequential causal effect) and the blip effect of treatment on the potential outcome after the last treatment. The well-known G-formula expresses these causal effects in terms of the standard parameters. In this article, we obtain a new G-formula that expresses these causal effects in terms of the point observable effects of treatments, analogous to treatment effects in the framework of single-point causal inference. Based on the new G-formula, we estimate these causal effects by maximum likelihood via the point observable effects, with methods extended from single-point causal inference. We are able to increase the precision of the estimation, without introducing bias, by using an unsaturated model that imposes constraints on the point observable effects. We are also able to reduce the number of point observable effects in the estimation under treatment assignment conditions.
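For context, the "well-known G-formula" referred to above expresses the mean potential outcome of a treatment regime by standardizing over the time-varying covariate distribution; the two-period, discrete-covariate form below uses generic notation, not necessarily the article's own symbols.

```latex
% Standard two-period G-formula; notation is generic, not the article's.
E\{Y(a_1, a_2)\}
  = \sum_{l_1, l_2}
      E\!\left[Y \mid A_1 = a_1, L_1 = l_1, A_2 = a_2, L_2 = l_2\right]
      \, P\!\left(L_2 = l_2 \mid A_1 = a_1, L_1 = l_1\right)
      \, P\!\left(L_1 = l_1\right).
```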
- Research Article
20
- 10.1136/rmdopen-2021-001654
- Jul 1, 2021
- RMD Open
Axial spondyloarthritis (axSpA) is a chronic rheumatic disease characterised by inflammation predominantly involving the spine and the sacroiliac joints. In some patients, axial inflammation leads to irreversible structural damage that...
- Research Article
2
- 10.1177/0962280220971835
- Nov 19, 2020
- Statistical Methods in Medical Research
Confounding is a major concern when using data from observational studies to infer the causal effect of a treatment. Instrumental variables, when available, have been used to construct bound estimates on population average treatment effects when outcomes are binary and unmeasured confounding exists. With continuous outcomes, meaningful bounds are more challenging to obtain because the domain of the outcome is unrestricted. In this paper, we propose to unify the instrumental variable and inverse probability weighting methods, together with suitable assumptions in the context of an observational study, to construct meaningful bounds on causal treatment effects. The contextual assumptions are imposed in terms of the potential outcomes, which are partially identified by the data. The inverse probability weighting component incorporates a sensitivity parameter to encode the effect of unmeasured confounding. The instrumental variable and inverse probability weighting methods are unified using principal stratification. By solving the resulting system of estimating equations, we are able to quantify both the causal treatment effect and the sensitivity parameter (i.e., the degree of unmeasured confounding). We demonstrate our method by analyzing data from the HIV Epidemiology Research Study.