Abstract
Imaging-based machine learning models are promising tools for breast cancer risk prediction. Validating these models across diverse cohorts is necessary to establish their performance and spur clinical implementation. We conducted an independent, external validation study of Mirai, a mammography-based deep learning model, using the Chicago Multiethnic Epidemiologic Cohort (ChiMEC), comprising 1671 exams from 704 cases and 4947 exams from 1437 cancer-free controls. We preprocessed images by extracting metadata from the mammograms and excluded non-screening exams; only exams with the four standard mammographic views were included. Images were converted from DICOM to PNG format using the DCMTK library. We computed the area under the receiver-operating characteristic curve (AUC) to evaluate the model's discriminating capacity for predicting breast cancer within 1-5 years, analyzing the entire cohort as well as strata defined by race and hormone-receptor (HR) status. Mirai performed well in our study, although its performance was lower than in the originally published validation of the model. The 1-year-risk AUC was 0.72 in our full cohort, higher than the 5-year-risk AUC (0.65). The 1-year AUC was high in African Americans but decreased over longer horizons; in contrast, the model showed lower but time-consistent AUC values in White patients. Performance was slightly better for predicting HR+ than HR- cancers. Our results suggest that Mirai has better accuracy for predicting short-term breast cancer risk than traditional risk factor-based models, such as the Gail and Tyrer-Cuzick models. This initial evaluation revealed some performance differences by race and HR status and underscores the need for more independent validations in diverse datasets to elucidate the generalizability of image-based deep learning for breast cancer risk prediction.
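The horizon-specific evaluation described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis code: the function name, array layout, and synthetic inputs are assumptions, and it simply labels an exam positive if cancer was diagnosed within each horizon before computing a standard AUC on the corresponding risk column.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def horizon_aucs(risk_scores, years_to_cancer, horizons=(1, 2, 3, 4, 5)):
    """Compute an AUC for each prediction horizon (illustrative sketch).

    risk_scores: (n_exams, n_horizons) array of model outputs, one column
        per horizon (Mirai produces cumulative 1-5 year risk scores).
    years_to_cancer: years from exam to diagnosis for cases; np.inf for
        exams from patients who remained cancer-free.
    """
    aucs = {}
    for j, k in enumerate(horizons):
        # An exam counts as positive if cancer occurred within k years.
        labels = (years_to_cancer <= k).astype(int)
        aucs[k] = roc_auc_score(labels, risk_scores[:, j])
    return aucs


# Toy usage with two cases (diagnosed at 0.5 and 2.5 years) and two controls.
years = np.array([0.5, 2.5, np.inf, np.inf])
scores = np.array([
    [0.9, 0.9, 0.9, 0.9, 0.9],
    [0.2, 0.8, 0.8, 0.8, 0.8],
    [0.1, 0.1, 0.1, 0.1, 0.1],
    [0.3, 0.3, 0.3, 0.3, 0.3],
])
print(horizon_aucs(scores, years))
```

In practice the reported confidence intervals would come from resampling (e.g., bootstrapping patients), and Harrell's C-index would be computed on the time-to-event data rather than on a fixed horizon.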
Table 1. Evaluation of performance of Mirai in the ChiMEC cohort

| Subset | Case exams | Control exams | Harrell's C-index | 1-year AUC | 2-year AUC | 3-year AUC | 4-year AUC | 5-year AUC |
|---|---|---|---|---|---|---|---|---|
| Full cohort (MGH) | 588 | 25267 | .75 (.72, .78) | .84 (.80, .87) | .78 (.75, .82) | .77 (.74, .80) | .76 (.73, .79) | .76 (.73, .79) |
| Full cohort (ChiMEC) | 1656 | 4765 | .64 (.62, .66) | .72 (.68, .75) | .67 (.65, .69) | .65 (.63, .67) | .65 (.64, .67) | .65 (.64, .67) |
| African American | 829 | 2174 | .64 (.61, .67) | .78 (.74, .82) | .69 (.65, .72) | .66 (.63, .69) | .66 (.64, .69) | .66 (.63, .68) |
| White | 711 | 1808 | .62 (.59, .65) | .63 (.57, .68) | .65 (.61, .68) | .63 (.60, .66) | .63 (.61, .66) | .64 (.61, .67) |
| Hispanic | 20 | 164 | .65 (.45, .86) | .63 (.31, .96) | .74 (.51, .97) | .70 (.51, .89) | .67 (.50, .83) | .67 (.51, .83) |
| Asian and Native American | 80 | 178 | .59 (.49, .70) | .67 (.53, .81) | .62 (.52, .73) | .63 (.54, .72) | .62 (.52, .71) | .63 (.53, .72) |
| Hormone receptor positive | 1281 | 4765 | .65 (.62, .68) | .74 (.70, .78) | .68 (.66, .71) | .66 (.64, .68) | .66 (.64, .68) | .66 (.64, .68) |
| Hormone receptor negative | 300 | 4765 | .62 (.58, .67) | .68 (.61, .75) | .65 (.60, .70) | .63 (.59, .67) | .63 (.59, .67) | .64 (.60, .67) |
| HER2 positive | 139 | 4765 | .62 (.54, .71) | .74 (.61, .86) | .64 (.56, .72) | .63 (.56, .69) | .64 (.58, .69) | .64 (.58, .69) |
| HER2 negative | 1138 | 4765 | .65 (.62, .67) | .74 (.70, .78) | .68 (.65, .71) | .66 (.64, .68) | .66 (.64, .68) | .66 (.64, .68) |
| Triple negative | 207 | 4765 | .61 (.55, .67) | .64 (.54, .74) | .63 (.57, .69) | .62 (.57, .67) | .62 (.57, .66) | .62 (.58, .67) |

Citation Format: Olasubomi J. Omoleye, Anna Woodard, Fangyuan Zhao, Maksim Levental, Toshio F. Yoshimatsu, Yonglan Zheng, Olufunmilayo I. Olopade, Dezheng Huo. Independent evaluation and validation of mammography-based breast cancer risk models in a diverse patient cohort [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1933.