Abstract

Anchored matching-adjusted indirect comparisons (MAICs) leverage individual patient-level data (IPD) to compare two treatments across separate trials that share a common comparator arm. Anchored MAICs require adjustment only for effect modifiers, which are typically identified a priori by expert clinicians. Emerging semi-automated and automated variable selection algorithms may offer advantages over clinician rankings, including consideration of effect modifiers on the appropriate scale, reproducibility, a data-driven approach, and the ability to leverage large datasets. This analysis compared simulated clinician rankings with semi-automated and automated algorithms for variable selection when conducting anchored MAICs. Simulated clinician ranking (a 60% chance of correct identification, increasing by 10% for each one-unit increase in log effect modifier size), the semi-automated high-dimensional propensity score [HDPS] algorithm, and four automated algorithms (hierNet, Bayesian projected prediction, Bayesian additive regression trees [BART], and random forest) were evaluated for variable selection in MAIC using 100 simulated trials with two true effect modifiers and eight noise variables. The effect modifiers added were 50% of the size of the treatment effect. Performance was measured by the average numbers of false positives and false negatives and by the additive absolute bias from missed effect modifiers. Simulated clinician ranking yielded one false negative and four false positives, with an absolute bias of 0.42. HDPS, hierNet, and Bayesian projected prediction had similar results (1.27, 0.88, and 0.60 false negatives; 1.01, 2.32, and 4.19 false positives; and absolute biases of 0.52, 0.36, and 0.25, respectively). The tree-based algorithms had the lowest absolute bias: BART and random forest had 0.12 and 0.02 false negatives and 2.02 and 1.92 false positives, with absolute biases of 0.018 and 0.003, respectively. Based on these simulations, tree-based algorithms performed best for variable selection in MAIC. These findings suggest that semi-automated and automated variable selection algorithms should also be considered to augment analyses based on clinician rankings when undertaking anchored MAICs.
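To make the weighting step that these variable selection approaches feed into concrete, the sketch below shows the standard method-of-moments weighting used in anchored MAIC: once effect modifiers have been selected (whether by clinician ranking or by one of the algorithms above), the IPD trial is re-weighted so that the weighted means of those effect modifiers match the means published for the comparator trial. This is a minimal illustration in plain NumPy/SciPy, not the authors' code or any specific MAIC package; the `maic_weights` function and the simulated data are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(ipd_em, agg_em_means):
    """Method-of-moments MAIC weights (illustrative sketch).

    ipd_em       : (n, p) array of selected effect-modifier values in the IPD trial
    agg_em_means : (p,) array of the same effect modifiers' means reported for the
                   comparator (aggregate-data) trial
    Returns weights, rescaled to sum to n, such that the weighted IPD means of the
    selected effect modifiers equal the aggregate means.
    """
    x = np.asarray(ipd_em, dtype=float) - np.asarray(agg_em_means, dtype=float)

    # Convex objective Q(a) = sum_i exp(x_i' a); at its minimum the gradient
    # sum_i x_i * exp(x_i' a) is zero, so the weights w_i = exp(x_i' a)
    # balance the centred effect modifiers exactly.
    def objective(a):
        return np.exp(x @ a).sum()

    def gradient(a):
        return x.T @ np.exp(x @ a)

    res = minimize(objective, np.zeros(x.shape[1]), jac=gradient, method="BFGS")
    w = np.exp(x @ res.x)
    return w * len(w) / w.sum()


# Hypothetical usage: weight an IPD trial on two selected effect modifiers.
rng = np.random.default_rng(0)
ipd = rng.normal(size=(200, 2))           # simulated IPD effect modifiers
target_means = np.array([0.3, -0.1])      # published comparator-trial means
w = maic_weights(ipd, target_means)
print(np.average(ipd, weights=w, axis=0)) # approximately [0.3, -0.1]
```

Because the objective is convex, any unconstrained optimiser recovers the unique balancing weights provided the aggregate means lie within the convex hull of the IPD covariates; the practical consequence of selecting too few or too many effect modifiers is reflected in the bias and false positive/negative metrics reported above.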
