Abstract

Verification bias arises in diagnostic test evaluation studies when the results of a first test are verified by a reference test in only a non-representative subsample of the original study subjects. This occurs, for example, when inclusion probabilities for the subsample depend on the first-stage results and/or on a covariate related to disease status. Reference standard bias arises when the reference test itself has imperfect sensitivity and specificity, but this information is ignored in the analysis. Reference standard bias typically leads to underestimation of the sensitivity and specificity of the test under evaluation, since subjects correctly diagnosed by that test may be classified as misdiagnosed owing to imperfections in the reference standard. In this paper, we describe a Bayesian approach for simultaneously addressing both verification bias and reference standard bias. Our models consider two types of verification bias: first, when subjects are selected for verification on the basis of the initial test results alone, and second, when selection is based on the initial test results and a covariate. We also present a model that adjusts for a third potential bias, which arises when the analysis assumes conditional independence between tests but some dependence exists between the initial test and the reference test. We examine the properties of our models using simulated data, and then apply them to a study of a screening test for dementia, providing bias-adjusted estimates of its sensitivity and specificity.
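To make the likelihood structure concrete, the following is a minimal sketch of the simplest setting described above (verification depending only on the initial test result, with conditional independence between the two tests); the notation here is generic and is not necessarily the paper's exact parameterization. Let $\pi$ denote the disease prevalence, $(s_1, c_1)$ the sensitivity and specificity of the initial test $T$, and $(s_2, c_2)$ those of the imperfect reference test $R$. For a verified subject, the probability of observing $(T=t, R=r)$, $t, r \in \{0,1\}$, marginalizing over the latent disease status, is
\[
p_{tr} \;=\; \pi\, s_1^{\,t}(1-s_1)^{1-t}\, s_2^{\,r}(1-s_2)^{1-r} \;+\; (1-\pi)\,(1-c_1)^{\,t}\, c_1^{1-t}\,(1-c_2)^{\,r}\, c_2^{1-r},
\]
while an unverified subject with $T=t$ contributes only the margin
\[
q_t \;=\; \pi\, s_1^{\,t}(1-s_1)^{1-t} \;+\; (1-\pi)\,(1-c_1)^{\,t}\, c_1^{1-t}.
\]
Because verification depends only on the observed result $T$, the selection mechanism is ignorable, and the likelihood for the verified cell counts $n_{tr}$ and unverified counts $m_t$ factorizes as
\[
L(\pi, s_1, c_1, s_2, c_2) \;\propto\; \prod_{t,r} p_{tr}^{\,n_{tr}} \,\prod_{t} q_t^{\,m_t},
\]
with independent Beta priors typically placed on each of the five parameters and the posterior explored by MCMC. Not all five parameters are identifiable from these data alone, which is one reason informative priors play a central role in the Bayesian approach. The conditional-dependence model mentioned above would additionally introduce covariance terms linking $T$ and $R$ within each disease class.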
