Abstract

Patient data is fragmented across multiple repositories, yielding suboptimal and costly care. Record linkage algorithms are widely accepted solutions for improving the completeness of patient records. However, studies often fail to fully describe their linkage techniques, and while many frameworks evaluate record linkage methods, few focus on producing gold standard datasets. This highlights a need to assess these frameworks and their real-world performance. We use real-world datasets and expand upon previous frameworks to evaluate a consistent approach to the manual review of gold standard datasets and to measure its impact on algorithm performance. We applied the framework, which includes elements for data description, reviewer training and adjudication, and software and reviewer descriptions, to four datasets. Record-pairs were formed, and 15,000 records were randomly sampled from these pairs. After training, two reviewers determined the match status of each record-pair; when the reviewers disagreed, a third reviewer provided final adjudication. Across the four datasets, the discordance rate ranged from 1.8% to 13.6%. While reviewers' discordance rates typically fell between 1% and 5%, one reviewer exhibited a 59% discordance rate, underscoring the importance of the third reviewer. The original analysis was compared with three sensitivity analyses and most often exhibited the highest predictive values. Reviewers vary in their assessment of a gold standard, which can lead to variance in estimates of matching performance. Our analysis demonstrates how a multi-reviewer process can be applied to create gold standards, identify reviewer discrepancies, and evaluate algorithm performance.
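
To make the adjudication step concrete, below is a minimal Python sketch of the multi-reviewer process described in the abstract. This is not the authors' software; the function name, the toy labels, and the `adjudicator` callback are all hypothetical, and the sketch assumes binary match/non-match labels per record-pair.

```python
# Minimal sketch of two-reviewer labeling with third-reviewer tie-breaking.
# All names and data here are hypothetical illustrations, not the study's code.

def resolve_match_status(reviewer_a, reviewer_b, adjudicator):
    """Combine two reviewers' labels for each record-pair.

    reviewer_a, reviewer_b: lists of bools (True = match), one per record-pair.
    adjudicator: callable taking a pair index and returning the final bool;
                 stands in for the third reviewer's judgment.
    Returns (final_labels, discordance_rate).
    """
    final = []
    discordant = 0
    for i, (a, b) in enumerate(zip(reviewer_a, reviewer_b)):
        if a == b:
            final.append(a)              # reviewers agree: keep their label
        else:
            discordant += 1
            final.append(adjudicator(i))  # disagreement: third reviewer decides
    rate = discordant / len(final) if final else 0.0
    return final, rate


# Toy usage with five record-pairs; pair 2 is discordant.
a_labels = [True, False, True, True, False]
b_labels = [True, False, False, True, False]
final, rate = resolve_match_status(a_labels, b_labels,
                                   adjudicator=lambda i: True)
print(final)          # [True, False, True, True, False]
print(f"{rate:.0%}")  # 20% of pairs required third-reviewer adjudication
```

The per-dataset discordance rates reported above (1.8% to 13.6%) correspond to the `rate` value in this sketch, computed once per dataset's sampled record-pairs.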
