Abstract

Purpose: To compare the ability of linear mixed models with different random effect distributions to estimate rates of visual field loss in glaucoma patients.

Methods: Eyes with five or more reliable standard automated perimetry (SAP) tests were identified from the Duke Glaucoma Registry. Mean deviation (MD) values from each visual field and their associated timepoints were collected. These data were modeled using ordinary least squares (OLS) regression and linear mixed models using the Gaussian, Student's t, or log-gamma (LG) distributions as the prior distribution for random effects. Model fit was compared using the Watanabe–Akaike information criterion (WAIC). Simulated eyes of varying initial disease severity and rates of progression were created to assess the accuracy of each model in predicting the rate of change and the likelihood of declaring progression.

Results: A total of 52,900 visual fields from 6558 eyes of 3981 subjects were included. The mean follow-up period was 8.7 ± 4.0 years, with an average of 8.1 ± 3.7 visual fields per eye. The LG model produced the lowest WAIC, demonstrating optimal model fit. In simulations, the LG model declared progression earlier than OLS (P < 0.001) and had the greatest accuracy in predicted slopes (P < 0.001). The Gaussian model significantly underestimated rates of progression among fast and catastrophic progressors.

Conclusions: Linear mixed models using the LG distribution outperformed conventional approaches for estimating rates of SAP MD loss in a population with glaucoma.

Translational Relevance: Use of the LG distribution in models estimating rates of change among glaucoma patients may improve their accuracy in rapidly identifying progressors at high risk for vision loss.
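The general workflow described in the Methods (per-eye OLS slopes of MD on time, a Bayesian linear mixed model with random intercepts and slopes, and WAIC-based model comparison) can be illustrated with the minimal Python sketch below. This is not the authors' implementation: the column names (eye_id, years, md), the priors, and the use of a Student's t random-effect distribution as a stand-in for a heavier-tailed alternative (PyMC has no built-in log-gamma random-effect prior) are all assumptions made for illustration only.

# Hedged sketch: per-eye OLS slopes vs. a Bayesian random-intercept/random-slope
# model for MD over time, with WAIC comparison. Column names and priors are
# illustrative assumptions, not the study's code.
import numpy as np
import pandas as pd
import pymc as pm
import arviz as az

def ols_slopes(df: pd.DataFrame) -> pd.Series:
    """Per-eye OLS slope of MD (dB) regressed on follow-up time (years)."""
    return df.groupby("eye_id").apply(
        lambda g: np.polyfit(g["years"], g["md"], 1)[0]
    )

def fit_mixed_model(df: pd.DataFrame, heavy_tailed: bool = True):
    """Linear mixed model for MD over time with per-eye random effects.

    heavy_tailed=True uses Student's t random effects as a stand-in for the
    heavier-tailed priors discussed in the abstract; False uses Gaussian.
    """
    eye_idx, eyes = pd.factorize(df["eye_id"])
    n_eyes = len(eyes)
    with pm.Model() as model:
        mu_a = pm.Normal("mu_a", 0.0, 10.0)   # population mean baseline MD (dB)
        mu_b = pm.Normal("mu_b", 0.0, 1.0)    # population mean slope (dB/year)
        sd_a = pm.HalfNormal("sd_a", 5.0)
        sd_b = pm.HalfNormal("sd_b", 1.0)
        if heavy_tailed:
            a = pm.StudentT("a", nu=4, mu=mu_a, sigma=sd_a, shape=n_eyes)
            b = pm.StudentT("b", nu=4, mu=mu_b, sigma=sd_b, shape=n_eyes)
        else:
            a = pm.Normal("a", mu=mu_a, sigma=sd_a, shape=n_eyes)
            b = pm.Normal("b", mu=mu_b, sigma=sd_b, shape=n_eyes)
        sigma = pm.HalfNormal("sigma", 2.0)   # residual SD of MD
        mu = a[eye_idx] + b[eye_idx] * df["years"].values
        pm.Normal("md_obs", mu=mu, sigma=sigma, observed=df["md"].values)
        idata = pm.sample(1000, tune=1000, target_accept=0.9,
                          idata_kwargs={"log_likelihood": True})
    return model, idata

# Usage (df holds one row per visual field test):
# gauss = fit_mixed_model(df, heavy_tailed=False)[1]
# heavy = fit_mixed_model(df, heavy_tailed=True)[1]
# az.compare({"gaussian": gauss, "student_t": heavy}, ic="waic")

As in the abstract, the model with the lowest WAIC among the candidate random-effect distributions would be preferred; the mixed model shrinks noisy per-eye slopes toward the population mean, whereas per-eye OLS treats each eye independently.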
