Abstract

The Institute of Medicine recommended conducting observational studies of childhood immunization schedule safety. Such studies could be biased by outcome misclassification, leading to incorrect inferences. Using simulations, we evaluated (1) outcome positive predictive values (PPVs) as indicators of bias in an exposure-outcome association, and (2) quantitative bias analysis (QBA) for bias correction. Simulations were based on proposed or ongoing Vaccine Safety Datalink studies. We simulated 4 studies of 2 exposure groups (children with no vaccines or on alternative schedules) and 2 baseline outcome levels (100 and 1000 per 100,000 person-years), with 3 relative risk (RR) levels (RR=0.50, 1.00, and 2.00), across 1000 replications using probabilistic modeling. We quantified bias from non-differential and differential outcome misclassification, based on levels previously measured in database research (sensitivity >95%; specificity >99%). We calculated median outcome PPVs, median observed RRs, type 1 error, and bias-corrected RRs following QBA. We observed PPVs from 34% to 98%. With non-differential misclassification and true RR=2.00, median bias was toward the null, with severe bias (median observed RR=1.33) at PPV=34% and modest bias (median observed RR=1.83) at PPV=83%. With differential misclassification, PPVs did not reflect median bias, and the type 1 error rate reached 100% with PPV=90%. QBA was generally effective in correcting misclassification bias. In immunization schedule studies, outcome misclassification may be non-differential or differential with respect to exposure. Overall outcome PPVs do not reflect the distribution of false positives by exposure and are poor indicators of bias in individual studies. Our results support QBA for immunization schedule safety research.
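The QBA approach described above can be illustrated with the standard sensitivity/specificity back-correction of observed case counts. The sketch below is illustrative only: the function names and all input numbers are our own assumptions, not values or code from the study, and it uses the simple closed-form correction for a misclassified binary outcome, allowing sensitivity (Se) and specificity (Sp) to differ by exposure group (differential misclassification).

```python
# Hedged sketch of quantitative bias analysis (QBA) for outcome
# misclassification. Function names and inputs are illustrative
# assumptions, not the study's actual code or data.

def corrected_cases(observed_cases, n, se, sp):
    """Back-calculate the true case count from the observed count.

    Observed cases = Se * true + (1 - Sp) * (n - true), solved for true.
    """
    return (observed_cases - (1 - sp) * n) / (se + sp - 1)

def corrected_rr(a1, n1, a0, n0, se1, sp1, se0, sp0):
    """Bias-corrected relative risk.

    Se/Sp are supplied per exposure group, so the same function handles
    both non-differential (equal) and differential (unequal) settings.
    """
    true1 = corrected_cases(a1, n1, se1, sp1)
    true0 = corrected_cases(a0, n0, se0, sp0)
    return (true1 / n1) / (true0 / n0)

# Illustrative scenario: baseline outcome level 1000 per 100,000,
# true RR = 2.00, non-differential Se = 0.95 and Sp = 0.99.
# True counts: 2000 exposed cases and 1000 unexposed cases per 100,000.
# Observed counts after misclassification:
#   exposed:   0.95 * 2000 + 0.01 * 98000 = 2880
#   unexposed: 0.95 * 1000 + 0.01 * 99000 = 1940
obs_rr = (2880 / 100_000) / (1940 / 100_000)   # ~1.48, biased toward null
qba_rr = corrected_rr(2880, 100_000, 1940, 100_000,
                      0.95, 0.99, 0.95, 0.99)  # recovers 2.00
```

Note how even with high sensitivity and specificity, the observed RR is pulled well below the true value of 2.00, consistent with the bias toward the null reported above, and the correction recovers the true RR when Se and Sp are correctly specified.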
