Abstract

In administrative database research, misclassification bias can result from diagnostic codes that imperfectly represent the condition being studied. It is unclear how to correct for this bias. Severe renal failure and Colles' fracture status were determined in two distinct cohorts using gold standard methods. True disease prevalence and disease association with other covariables were measured and compared with results when disease status was determined using diagnostic codes. Differences ('misclassification bias') were then adjusted for using two methods: quantitative bias analysis (QBA) with bias parameters (code sensitivity and specificity) of varying accuracy; and disease status imputation using bootstrap methods and disease probability models. Prevalences of severe renal failure (n = 50 074) and Colles' fracture (n = 5680) were 7.5% and 37.0%, respectively. Compared with true values, important bias resulted when diagnostic codes were used to measure disease prevalence and disease-covariable associations. QBA increased bias when population-based (vs strata-specific) bias parameters were used. QBA's ability to account for misclassification bias was most dependent upon deviations in code specificity. Bootstrap imputation accounted for misclassification bias, but this depended on disease model calibration. Extensive bias can result from using inaccurate diagnostic codes to determine disease status. This bias can be addressed with QBA using accurate bias parameter measures, or by bootstrap imputation using well-calibrated disease prediction models.
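
The abstract names two correction strategies without showing their mechanics. The sketch below illustrates both under stated assumptions: the first function applies the standard quantitative bias analysis (QBA) correction of an observed, code-based prevalence using assumed (non-differential) code sensitivity and specificity; the second imputes disease status from per-patient probabilities produced by a hypothetical disease prediction model inside a bootstrap loop. The numbers, function names, and model outputs are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def qba_corrected_prevalence(observed_prev, sensitivity, specificity):
    """Standard QBA correction of an observed (code-based) prevalence.

    Solves observed = true*Se + (1 - true)*(1 - Sp) for the true prevalence:
        true = (observed + Sp - 1) / (Se + Sp - 1)
    Assumes non-differential misclassification and Se + Sp > 1.
    """
    return (observed_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical example: codes flag 9% of patients; assumed Se = 0.80, Sp = 0.97.
print(qba_corrected_prevalence(0.09, 0.80, 0.97))  # ~0.078

def bootstrap_impute_prevalence(disease_prob, n_boot=1000, seed=None):
    """Estimate prevalence by bootstrap imputation of disease status.

    `disease_prob` holds per-patient probabilities from a (hypothetical)
    disease prediction model; each iteration resamples patients with
    replacement and draws each patient's disease status as a Bernoulli trial.
    Returns the mean estimate and a percentile 95% interval.
    """
    rng = np.random.default_rng(seed)
    n = len(disease_prob)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample patients
        status = rng.random(n) < disease_prob[idx]   # impute disease status
        estimates.append(status.mean())
    return np.mean(estimates), np.percentile(estimates, [2.5, 97.5])

# Hypothetical predicted probabilities for 10 000 patients.
rng = np.random.default_rng(0)
probs = rng.beta(1, 12, 10_000)
print(bootstrap_impute_prevalence(probs, seed=1))

As the abstract notes, the QBA correction is only as good as the assumed sensitivity and specificity, and the imputation approach is only as good as the calibration of the probabilities fed into it.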
