Background. Breast background parenchymal enhancement (BPE) is correlated with breast cancer risk. In contrast-enhanced mammography (CEM), BPE level is currently assessed by radiologists using four classes, minimal, mild, moderate and marked, as described in the Breast Imaging Reporting and Data System (BI-RADS). However, BPE classification remains subject to intra- and inter-reader variability. Fully automated methods to assess BPE level have already been developed for breast contrast-enhanced MRI (CE-MRI) and have been shown to provide accurate and repeatable BPE level classification. However, to our knowledge, no BPE level classification tool for CEM is available in the literature.

Materials and methods. A deep learning-based BPE level classification tool was trained and optimized on 7012 CEM image pairs (low-energy and recombined images) and evaluated on a dataset of 1013 image pairs. The impact of image resolution, backbone architecture and loss function was analyzed, as well as the influence of lesion presence and type on BPE assessment. Model performance was evaluated using several metrics, including 4-class balanced accuracy and mean absolute error. The results of the optimized model for a binary classification, minimal/mild versus moderate/marked, were also investigated.

Results. The optimized model achieved a 4-class balanced accuracy of 71.5% (95% CI: 71.2–71.9), with 98.8% of classification errors occurring between adjacent classes. For the binary classification, accuracy reached 93.0%. A slight decrease in model accuracy was observed in the presence of lesions, but it was not statistically significant, suggesting that the model is robust to the presence of lesions for this classification task. Visual assessment also showed that the model is more affected by non-mass enhancements than by mass-like enhancements.

Conclusion. The proposed BPE classification tool for CEM achieves results comparable to those published in the literature for CE-MRI.
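The evaluation metrics named above (4-class balanced accuracy, mean absolute error, adjacent-class error rate, and the collapsed binary accuracy) can be computed as in the following minimal sketch. This is an illustration of the metric definitions only, not the authors' evaluation code; the label arrays are placeholders standing in for radiologist BPE labels and model predictions.

```python
# Illustrative sketch: computing the abstract's evaluation metrics for a
# 4-class ordinal BPE task. Labels and predictions below are dummy values.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, accuracy_score

# BI-RADS BPE levels encoded ordinally: 0=minimal, 1=mild, 2=moderate, 3=marked.
y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3])   # placeholder reference labels
y_pred = np.array([0, 1, 1, 3, 2, 2, 0, 2])   # placeholder model predictions

# 4-class balanced accuracy: mean of per-class recalls, robust to class imbalance.
bal_acc_4 = balanced_accuracy_score(y_true, y_pred)

# Mean absolute error on the ordinal scale: distant-class confusions cost more.
mae = np.mean(np.abs(y_true - y_pred))

# Fraction of misclassifications that fall in an adjacent BPE class.
errors = y_true != y_pred
adjacent_error_rate = (
    np.mean(np.abs(y_true[errors] - y_pred[errors]) == 1) if errors.any() else 0.0
)

# Binary task: collapse minimal/mild (0, 1) versus moderate/marked (2, 3).
bin_true = (y_true >= 2).astype(int)
bin_pred = (y_pred >= 2).astype(int)
bin_acc = accuracy_score(bin_true, bin_pred)

print(f"4-class balanced accuracy: {bal_acc_4:.3f}")
print(f"Mean absolute error:       {mae:.3f}")
print(f"Adjacent-class error rate: {adjacent_error_rate:.3f}")
print(f"Binary accuracy:           {bin_acc:.3f}")
```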