Abstract
Objectives
Computable or electronic phenotypes of patient conditions are becoming more commonplace in quality improvement and clinical research. During phenotyping algorithm validation, standard classification performance measures (i.e., sensitivity, specificity, positive predictive value, negative predictive value, and accuracy) are often employed. When validation is performed on a randomly sampled patient population, direct estimates of these measures are valid. However, studies commonly sample patients conditional on the algorithm result prior to validation, leading to a form of bias known as verification bias.

Methods
We illustrate validation study sampling design and compare naïve and bias-corrected estimates of validation performance through both a concrete example (1,000 cases, 100 noncases, 1:1 sampling on predicted status) and a more thorough simulation study under varied realistic scenarios. We additionally describe the development of a free web calculator that adjusts these estimates for investigators validating phenotyping algorithms.

Results
In our illustrative example, the naïve performance estimates were 0.942 sensitivity, 0.979 specificity, and 0.960 accuracy; these contrast with bias-corrected estimates of 0.620 sensitivity, 0.999 specificity, and 0.944 accuracy after adjusting for verification bias using our free calculator. Our simulation results demonstrate increasing positive bias for sensitivity and negative bias for specificity as disease prevalence approaches zero, with decreasing positive predictive value moderately exacerbating these biases.

Conclusion
Validation studies of novel computable phenotypes of patient conditions must account for verification bias when calculating algorithm performance measures. Because these measures can vary substantially with disease prevalence in the source population, use of a free web calculator to adjust them is desirable.
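To make the adjustment concrete, the sketch below shows a standard Begg-Greenes-style correction for verification bias when chart review is sampled conditional on the algorithm's predicted status. The abstract does not specify the calculator's internal formulas, so this is only an illustration under that assumption; the function name corrected_performance and the example counts are hypothetical and do not reproduce the paper's numbers.

```python
def corrected_performance(n_pos, n_neg, tp, fp, fn, tn):
    """Bias-corrected sensitivity, specificity, and accuracy when the
    validation sample is drawn conditional on predicted (algorithm) status.

    n_pos, n_neg : algorithm-positive / algorithm-negative counts in the
                   full source population
    tp, fp       : verified algorithm-positives with / without the condition
    fn, tn       : verified algorithm-negatives with / without the condition
    """
    # PPV and NPV are estimated directly within each verified stratum,
    # so they are unaffected by sampling on predicted status.
    ppv = tp / (tp + fp)
    npv = tn / (fn + tn)

    # Project the stratum-specific estimates back onto the full population
    # to reconstruct the expected 2x2 table.
    est_tp = n_pos * ppv
    est_fp = n_pos * (1 - ppv)
    est_tn = n_neg * npv
    est_fn = n_neg * (1 - npv)

    sensitivity = est_tp / (est_tp + est_fn)
    specificity = est_tn / (est_tn + est_fp)
    accuracy = (est_tp + est_tn) / (n_pos + n_neg)
    return sensitivity, specificity, accuracy


if __name__ == "__main__":
    # Hypothetical cohort: 1,000 algorithm-positive and 9,000 algorithm-negative
    # patients, with 100 charts reviewed from each group.
    sens, spec, acc = corrected_performance(
        n_pos=1_000, n_neg=9_000, tp=95, fp=5, fn=2, tn=98
    )
    print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, accuracy={acc:.3f}")
```

Naïve estimates computed directly from the reviewed charts ignore the fact that algorithm-negatives are heavily undersampled relative to their share of the population, which is what inflates sensitivity and deflates specificity as prevalence falls.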