Abstract

Data collected in a routine clinical setting are frequently used to compare antiretroviral treatments for human immunodeficiency virus (HIV). Differences in the frequency of measurement of HIV RNA levels and CD4-positive T-lymphocyte cell counts introduce a possible source of bias into estimates of the difference in effectiveness between treatments. The authors investigated the size of this bias when survival analysis methods are used to compare the initial efficacy of antiretroviral regimens. Data sets of clinical markers were simulated by use of differential equations that model the interaction between HIV and human T-cells. Cox proportional hazards and parametric models were fitted to the simulated data sets to evaluate the bias and coverage of 95% confidence intervals for the difference between regimens. The authors' results demonstrate that differences in the frequency of follow-up can substantially bias estimated treatment differences if methods do not correctly account for the intervals between measurements and if the statistical model chosen does not fit the data well. Analyses using methods applicable to interval-censored data reduce the bias. In the Athena cohort of HIV-infected individuals in the Netherlands from 1999 to 2003, there are differences in measurement frequency between current regimens that are of sufficient magnitude to conclude incorrectly that some regimens are more effective than others.
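The mechanism the abstract describes can be illustrated with a small simulation sketch. This is not the authors' model (they simulate clinical markers from differential equations of HIV/T-cell dynamics); it is a minimal, hypothetical example assuming two regimens with identical true time-to-suppression distributions but different visit schedules. A naive analysis that treats the first visit at which suppression is detected as the event time makes the less frequently monitored regimen look slower, while a simple interval-censoring correction (midpoint imputation, used here purely for illustration) shrinks the artifactual gap.

```python
import math
import random
import statistics

random.seed(0)

def observed_time(true_t, visit_interval):
    """Time of the first scheduled visit at or after the true event time.
    A naive analysis treats this right endpoint as the event time."""
    return math.ceil(true_t / visit_interval) * visit_interval

# Hypothetical setup: both regimens share the SAME true time-to-suppression
# distribution (exponential, mean 8 weeks), so any apparent difference is
# purely a measurement-frequency artifact.
n = 10_000
true_times = [random.expovariate(1 / 8.0) for _ in range(n)]

naive_a = [observed_time(t, 4) for t in true_times]   # visits every 4 weeks
naive_b = [observed_time(t, 12) for t in true_times]  # visits every 12 weeks

# Midpoint imputation: place the event at the middle of the censoring
# interval (previous visit, detecting visit) instead of its right endpoint.
mid_a = [t - 4 / 2 for t in naive_a]
mid_b = [t - 12 / 2 for t in naive_b]

gap_naive = statistics.mean(naive_b) - statistics.mean(naive_a)
gap_mid = abs(statistics.mean(mid_b) - statistics.mean(mid_a))
print(f"apparent gap, naive right-endpoint analysis: {gap_naive:.2f} weeks")
print(f"apparent gap, midpoint imputation:           {gap_mid:.2f} weeks")
```

With these (assumed) parameters the naive comparison shows a gap of several weeks between truly identical regimens, and midpoint imputation reduces but does not eliminate it, consistent with the abstract's finding that interval-censored methods reduce, rather than remove, the bias.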
