Abstract

Estimates of disease prevalence in any host population are complicated by uncertainty in the outcome of diagnostic tests on individuals. In the absence of gold-standard diagnostics (tests that give neither false positives nor false negatives), Bayesian latent class inference can be applied to batteries of diagnostic tests, providing posterior estimates of the sensitivity and specificity of each test alongside posterior estimates of disease prevalence. Here we explore the influence of the precision and accuracy of prior information on the precision and accuracy of posterior estimates of these key parameters. Our simulations use three diagnostic tests, yielding eight possible diagnostic outcomes per individual. Seven degrees of freedom allow the estimation of seven parameters: the sensitivity and specificity of each test, and disease prevalence. We show that prior precision begets posterior precision, but only when priors are accurate. We also show that analyses without a gold standard can use imprecise priors as long as they are initialised with accurate values. Imprecise priors risk the divergence of MCMC chains towards inaccurate posterior estimates if inaccurate initial values are used. We note that inaccurate priors can yield inaccurate and imprecise inference. Bounded priors should certainly not be used unless their accuracy is well established. Inaccurate estimates of sensitivity or specificity can yield wildly inaccurate estimates of disease prevalence. Our analyses are motivated by studies of bovine tuberculosis in a wild badger population.
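As an illustrative sketch only (not the study's own code), the eight outcome-pattern probabilities follow directly from this parameterisation if the three tests are assumed conditionally independent given true infection status, as in standard Hui–Walter latent class models; the function name and the numerical values below are ours, chosen purely to show the calculation.

```python
from itertools import product

def cell_probabilities(prev, se, sp):
    """Probability of each of the 2**3 = 8 diagnostic outcome patterns for
    three conditionally independent tests, given disease prevalence `prev`
    and length-3 sequences of sensitivities `se` and specificities `sp`."""
    probs = {}
    for pattern in product([0, 1], repeat=3):  # 1 = test positive, 0 = negative
        p_infected = prev        # contribution from truly infected individuals
        p_uninfected = 1 - prev  # contribution from truly uninfected individuals
        for result, se_t, sp_t in zip(pattern, se, sp):
            p_infected *= se_t if result else (1 - se_t)
            p_uninfected *= (1 - sp_t) if result else sp_t
        probs[pattern] = p_infected + p_uninfected
    return probs

# Illustrative values only: 15% prevalence and three imperfect tests.
p = cell_probabilities(0.15, se=[0.50, 0.80, 0.93], sp=[0.99, 0.94, 0.90])
assert abs(sum(p.values()) - 1.0) < 1e-12  # the eight cell probabilities sum to 1
```

Under this assumption, the seven free parameters (three sensitivities, three specificities, prevalence) are matched by the seven degrees of freedom in the eight multinomial cell counts, which is why the model described above is just identifiable.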

Highlights

  • Uncertainty lies at the heart of real-world epidemiology

  • We determine whether prevalence is estimated accurately and precisely, using [1] raw test outcomes assuming perfect sensitivity and specificity, [2] models with incorrect, precise priors, [3] models with accurate, precise priors, and [4] models with imprecise priors

  • Precise priors can aid identifiability. With often limited information regarding the performance of diagnostic tests, we explore how the accuracy of prior information affects conclusions regarding disease prevalence using four scenarios (see the sketch after this list): [1] prior specifies lower test A sensitivity (μ = 0.292); [2] prior specifies higher test A sensitivity (μ = 0.692); [3] prior specifies lower test B specificity (μ = 0.736); [4] prior specifies higher test B specificity (μ = 0.999)
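One way to make these scenarios concrete is to parameterise each Beta prior by its mean and a concentration (an effective prior sample size), so that the same prior mean can be made precise or imprecise. The helper below and the concentration value are hypothetical illustrations of that idea, not the study's implementation.

```python
def beta_from_mean(mu, kappa):
    """Shape parameters (alpha, beta) of a Beta distribution with mean `mu`
    and concentration `kappa`; larger `kappa` gives a more precise prior."""
    return mu * kappa, (1.0 - mu) * kappa

# Centre a Beta prior on each scenario's stated mean; kappa = 100 is an
# arbitrary choice here, used only to illustrate a relatively precise prior.
priors = {
    "scenario 1: lower test A sensitivity":  beta_from_mean(0.292, kappa=100),
    "scenario 2: higher test A sensitivity": beta_from_mean(0.692, kappa=100),
    "scenario 3: lower test B specificity":  beta_from_mean(0.736, kappa=100),
    "scenario 4: higher test B specificity": beta_from_mean(0.999, kappa=100),
}
```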



Introduction

Uncertainty lies at the heart of real-world epidemiology. Imperfect pathogen detection is a common occurrence when sampling live populations, with studies often drawing conclusions from the results of one or more tests, none of which are 100% accurate [e.g., [3]]. This is important because methods for the accurate detection of disease are pivotal to surveillance programmes that focus on the spatial and temporal spread of pathogens within and between populations, with infection prevalence often the primary parameter of interest [4,5,6,7].


