This study investigates radiologists' interpretation errors when reading dense screening mammograms using a radiomics-based artificial intelligence approach. Thirty-six radiologists from China and Australia read 60 dense mammograms. For each cohort, we identified normal areas that appeared suspicious for cancer and malignant areas containing cancers. Radiomic features were then extracted from these identified areas, and random forest models were trained to recognize the areas most frequently linked to diagnostic errors within each cohort. The performance of the models and the discriminatory power of significant radiomic features were assessed. In the Chinese cohort, the AUC values for predicting false positives were 0.864 for the craniocaudal (CC) view and 0.829 for the mediolateral oblique (MLO) view; in the Australian cohort, they were 0.652 (CC) and 0.747 (MLO). For false negatives, the AUC values were 0.677 (CC) and 0.673 (MLO) in the Chinese cohort, and 0.600 (CC) and 0.505 (MLO) in the Australian cohort. In both cohorts, regions with higher Gabor and maximum response filter outputs were more prone to false positives, while areas with pronounced intensity changes and coarse textures were more likely to yield false negatives. This cohort-based pipeline effectively identifies errors common to specific reader cohorts from image-derived radiomic features, demonstrating that radiomics-based AI can identify and predict radiologists' interpretation errors in dense mammograms, with distinct radiomic features linked to false positives and false negatives in the Chinese and Australian cohorts.
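The core of the pipeline described above — radiomic feature vectors per image region, a random forest classifier per cohort, and AUC-based evaluation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature matrix here is synthetic, standing in for real radiomic features (e.g. Gabor, intensity, and texture measures that would typically come from a toolkit such as PyRadiomics), and all sizes and hyperparameters are assumed.

```python
# Hedged sketch of the cohort-based error-prediction pipeline:
# classify regions as error-linked vs. not from radiomic features
# with a random forest, scored by cross-validated AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_regions, n_features = 200, 20                 # hypothetical cohort size
X = rng.normal(size=(n_regions, n_features))    # stand-in radiomic feature matrix
y = rng.integers(0, 2, size=n_regions)          # 1 = region linked to reader error

clf = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"mean cross-validated AUC: {auc:.3f}")
```

In the study, separate models of this kind would be trained per cohort, per view (CC/MLO), and per error type (false positive vs. false negative), with feature importances inspected to find the discriminative radiomic features.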