Abstract

Deep neural networks (DNNs) are now useful tools for mammogram classification in breast cancer screening. In Vietnam, however, relatively few mammograms are available for training DNNs, so this study applies transfer learning to improve the performance of DNN models. In the first step, 10,418 breast cancer images from the Digital Database for Screening Mammography were used to train a ResNet 34 convolutional neural network. In the second step, this model was fine-tuned on the Hanoi Medical University (HMU) database of 6,248 Vietnamese mammograms. On the test dataset, the best of these ResNet 34 models achieves a macro-average AUC (macAUC) of 0.766 in classifying breast X-ray images into three Breast Imaging-Reporting and Data System (BI-RADS) categories: BI-RADS 045 (‘incomplete and malignant’), BI-RADS 1 (‘normal’), and BI-RADS 23 (‘benign’). This result is higher than that of a ResNet 50 model trained only on 7,912 breast cancer X-ray images from the HMU dataset, which achieves a macAUC of 0.754. A comparison with other works shows that the proposed transfer-learned ResNet 34 model outperforms the compared models.
