Abstract

In survival analyses of longitudinal data, death is often a competing event for the disease of interest, and the time to disease onset is interval-censored when the diagnosis can only be made at intermittent follow-up visits. As a result, the disease status at death is unknown for subjects who were disease-free at their last visit before death. The standard survival analysis right-censors the time to disease onset at that visit, which may underestimate disease incidence. By contrast, an illness-death model for interval-censored data accounts for the probability of developing the disease between that visit and death, and thus provides a better incidence estimate. However, the two approaches have never been compared for estimating the effect of an exposure on disease risk. This paper uses simulations to compare the accuracy of the effect estimates from a semi-parametric illness-death model for interval-censored data and from the standard Cox model. The two approaches are also compared for estimating the effects of selected risk factors on the risk of dementia, using data from the French elderly PAQUID cohort. The illness-death model provided more accurate estimates of the effects of exposures that also affected mortality. The direction and magnitude of the bias from the Cox model depended on the effects of the exposure on both disease and death. The application to the PAQUID cohort confirmed the simulation results. If follow-up intervals are wide and the exposure affects death, the illness-death model for interval-censored data should be preferred to the standard Cox regression analysis.
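To make the contrast concrete, the sketch below shows how the standard Cox analysis described above treats the data: subjects diagnosed at a visit are counted as events at that visit, while subjects who die undiagnosed are right-censored at their last disease-free visit. Everything in the sketch is illustrative rather than taken from the paper: the simulated hazards, visit schedule, column names, and the use of the Python package lifelines are all assumptions, and the code is not the paper's simulation design.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumed dependency: pip install lifelines

rng = np.random.default_rng(0)
n = 500
exposure = rng.integers(0, 2, n)

# Hypothetical true model: the exposure raises both the disease hazard
# (true log HR = 0.5) and the death hazard (log HR = 0.7).
onset = rng.exponential(scale=1.0 / (0.05 * np.exp(0.5 * exposure)))
death = rng.exponential(scale=1.0 / (0.04 * np.exp(0.7 * exposure)))

# Diagnosis is only possible at intermittent visits (every 3 years over a
# 15-year study), so the onset time itself is never observed exactly.
visits = np.arange(3, 16, 3)

rows = []
for i in range(n):
    # first visit at or after onset that the subject attends alive, if any
    diagnosis_visit = next((v for v in visits if onset[i] <= v < death[i]), None)
    # last visit attended while still disease-free (0.0 = study entry)
    last_df_visit = max([0.0] + [v for v in visits if v < min(onset[i], death[i])])
    if diagnosis_visit is not None:
        time, event = float(diagnosis_visit), 1
    elif death[i] < 15:
        # standard Cox convention criticized in the abstract: died undiagnosed,
        # so right-censor at the last disease-free visit
        time, event = last_df_visit, 0
    else:
        time, event = 15.0, 0        # alive and undiagnosed at the end of the study
    rows.append({"exposure": int(exposure[i]), "time": time, "event": event})

cox_df = pd.DataFrame(rows)
cox_df = cox_df[cox_df["time"] > 0]   # subjects censored at entry carry no information

cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time", event_col="event")
print(cph.summary[["coef", "se(coef)"]])   # estimated log HR versus the true value 0.5

Fitting the illness-death counterpart additionally models the unobserved healthy-to-ill transition between the last disease-free visit and death, which requires specialized software (for example, the R package SmoothHazard); comparing its estimated log hazard ratio for the healthy-to-ill transition with the Cox estimate above is the kind of comparison the abstract reports.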
