Abstract

In HIV/AIDS studies, viral load measurements are often highly skewed and left-censored because of a lower detection limit. Furthermore, a terminal event (e.g., death) stops the follow-up process, and the time to the terminal event may depend on the viral load measurements. In this article, we present a joint analysis framework to model censored longitudinal data with skewness together with a terminal event process. Estimation is carried out by adaptive Gaussian quadrature techniques in the SAS procedure NLMIXED. The proposed model is evaluated in a simulation study and applied to the motivating Multicenter AIDS Cohort Study (MACS).

Highlights

  • In many AIDS studies, the infection and progression of human immunodeficiency virus type 1 (HIV-1) are usually measured by viral load and CD4 cell count

  • Tobit models explicitly incorporate into the likelihood function both the probability that an observation falls below the limit of detection (LOD) and the probability distribution of an observation given that it is above the LOD

  • All joint models perform significantly better than their reduced model counterparts, with smaller −2LogLik, Akaike information criterion (AIC), and Bayesian information criterion (BIC) values, suggesting that the joint models are preferable to their reduced model counterparts
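The model comparison in the highlights above rests on the standard information criteria. As a minimal sketch (not the paper's code, which uses SAS PROC NLMIXED), the criteria can be computed from a fitted model's log-likelihood, its number of parameters k, and the sample size n:

```python
import math

def neg2loglik(loglik):
    # -2 * log-likelihood; smaller is better for nested comparisons
    return -2.0 * loglik

def aic(loglik, k):
    # AIC = -2*logLik + 2*k: penalizes each extra parameter by 2
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    # BIC = -2*logLik + k*log(n): a heavier penalty when n > e^2
    return -2.0 * loglik + k * math.log(n)
```

A joint model is then favored over its reduced counterpart when it attains smaller values of all three criteria, as reported in the paper.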


Summary

Introduction

In many AIDS studies, the infection and progression of human immunodeficiency virus type 1 (HIV-1) are usually measured by viral load (plasma HIV-1 RNA copies) and CD4 cell count (the number of CD4+ T lymphocytes per volume of blood). Despite improvements in measurement technology, viral load measurements are still subject to censoring due to limits of detection (LOD), e.g., left censoring due to a lower LOD of 50 copies/ml in the ultrasensitive assay (Schockmel et al., 1997). To address censored longitudinal response variables, a common practice is to impute the censored values by the LOD or by some value derived from it, such as half the LOD. Tobit models (Su and Luo), which assume normal distributions for the random errors, instead model the censoring mechanism directly and usually provide consistent parameter estimates when the normality assumption is satisfied.
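The Tobit idea described above can be made concrete with a small sketch of the left-censored log-likelihood: an uncensored observation contributes the normal density, while an observation below the LOD contributes the probability mass P(Y &lt; LOD) = Φ((LOD − μ)/σ). This is an illustrative simplification with a common mean μ (the paper fits a full joint mixed model in SAS NLMIXED), and the function name and arguments here are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def tobit_loglik(y, censored, mu, sigma, lod):
    """Log-likelihood for left-censored (Tobit) normal data.

    y        : observed values (entries flagged censored are ignored)
    censored : boolean array, True where the value fell below the LOD
    mu, sigma: mean and standard deviation of the assumed normal model
    lod      : lower limit of detection (e.g., 50 copies/ml on the raw scale)
    """
    y = np.asarray(y, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    ll = np.empty_like(y)
    # Uncensored observations contribute the normal log-density ...
    ll[~censored] = norm.logpdf(y[~censored], loc=mu, scale=sigma)
    # ... censored observations contribute log P(Y < LOD) = log Phi((LOD - mu)/sigma)
    ll[censored] = norm.logcdf(lod, loc=mu, scale=sigma)
    return ll.sum()
```

Maximizing this likelihood over (μ, σ), rather than imputing the LOD or half the LOD, is what yields the consistency property under normality that the paragraph above refers to.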


