Abstract
Background
Imperfect diagnostic testing reduces the power to detect significant predictors in classical cross-sectional studies. Assuming that the misclassification in diagnosis is random, this can be dealt with by increasing the sample size of a study. However, the effects of imperfect tests in longitudinal data analyses are not as straightforward to anticipate, especially if the outcome of the test influences behaviour. The aim of this paper is to investigate the impact of imperfect test sensitivity on the determination of predictor variables in a longitudinal study.
Methodology/Principal Findings
To deal with imperfect test sensitivity affecting the response variable, we transformed the observed response variable into a set of possible temporal patterns of true disease status, whose prior probability was a function of the test sensitivity. We fitted a Bayesian discrete time survival model using an MCMC algorithm that treats the true response patterns as unknown parameters in the model. We applied our approach to epidemiological data of bovine tuberculosis outbreaks in England and investigated the effect of reduced test sensitivity on the determination of risk factors for the disease. We found that reduced test sensitivity changed which risk factors for the probability of an outbreak were selected in the ‘best’ model, and increased the uncertainty surrounding the parameter estimates for a model with a fixed set of risk factors associated with the response variable.
Conclusions/Significance
We propose a novel algorithm to fit discrete survival models for longitudinal data where values of the response variable are uncertain. When analysing longitudinal data, uncertainty surrounding the response variable will affect the significance of the predictors and should therefore be accounted for, either at the design stage by increasing the sample size or at the post-analysis stage by conducting appropriate sensitivity analyses.
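The pattern-augmentation step described above can be sketched in a few lines. The following is an illustrative Python snippet (not the authors' code) for a single unit whose observed test history is a run of negatives ending in a positive: it enumerates the possible true onset times of disease and weights each by a prior that depends on the test sensitivity `se`, assuming perfect specificity so that a positive test is always a true positive.

```python
def pattern_priors(obs, se):
    """Prior probabilities over true disease-onset times for one unit.

    obs: list of 0/1 test results ending in the first observed positive,
         e.g. [0, 0, 0, 1].
    se:  test sensitivity in (0, 1].

    A pattern with onset at time s implies the tests at times s..t-1
    (where t is the first observed positive) were false negatives, each
    with probability (1 - se), and the test at t was a true positive
    with probability se. Specificity is assumed perfect.
    """
    t_pos = obs.index(1)  # index of the first observed positive test
    weights = {}
    for onset in range(t_pos + 1):
        n_false_neg = t_pos - onset
        weights[onset] = (1 - se) ** n_false_neg * se
    total = sum(weights.values())
    return {onset: w / total for onset, w in weights.items()}
```

With `se = 1.0` all the prior mass sits on the observed onset time; as `se` falls, earlier (unobserved) onsets gain probability, which is exactly the extra uncertainty the MCMC algorithm must integrate over.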
Highlights
The estimation of disease incidence and prevalence, and the identification of potential risk factors associated with a disease are hampered by imperfect diagnostic tests
The methods proposed can be used in many infectious disease scenarios, but here we focus on modelling risk factors for bovine tuberculosis in Great Britain using a subset of data from the Randomised Badger Culling Trial (RBCT) [5]
Using data collected in one area of the Randomised Badger Culling Trial (RBCT) where proactive culling of badgers occurred, we tested the effect of varying the bovine tuberculosis test sensitivity from 50 to 100% on the identification of risk factors for bTB herd breakdown (HBD)
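The effect of sweeping the sensitivity from 50 to 100% can be made concrete with a small, self-contained Python illustration (ours, not the paper's): for a unit observed negative on `n_negatives` tests before its first positive, the Shannon entropy of the prior over true onset times (assuming perfect specificity) quantifies how much uncertainty about the response a given sensitivity introduces.

```python
import math

def onset_entropy(n_negatives, se):
    """Shannon entropy (bits) of the prior over true onset times for a
    unit observed negative n_negatives times and then positive once.

    Each candidate onset s implies (n_negatives - s) false negatives,
    each with probability (1 - se); specificity is assumed perfect.
    Higher entropy means the true onset time is less well determined.
    """
    weights = [(1 - se) ** (n_negatives - s) * se
               for s in range(n_negatives + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uncertainty about the true onset grows as sensitivity falls:
for se in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(f"Se = {se:.1f}: {onset_entropy(3, se):.3f} bits")
```

At 100% sensitivity the entropy is zero (the observed onset is the true one); at lower sensitivities the prior spreads over earlier onsets, consistent with the increased parameter uncertainty reported above.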
Summary
The estimation of disease incidence and prevalence, and the identification of potential risk factors associated with a disease, are hampered by imperfect diagnostic tests. The methods proposed to correct for imperfect testing have generally been based on sensitivity analyses and produce adjusted prevalence estimates for specific scenarios. This is a valid approach for cross-sectional studies, but it ignores the possibility that, in a longitudinal setting, the test result of an individual subject (or unit) might affect the testing regime and the subsequent tests performed on the same subject/unit. The aim of this paper is to investigate the impact of imperfect test sensitivity on the determination of predictor variables in a longitudinal study.