Abstract

In this work, a new framework for breast cancer image segmentation and classification is proposed. Several models, including InceptionV3, DenseNet121, ResNet50, VGG16, and MobileNetV2, are applied to classify mammograms from the Mammographic Image Analysis Society (MIAS) dataset, the Digital Database for Screening Mammography (DDSM), and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM) as benign or malignant. In addition, a trained modified U-Net model is used to segment the breast region from the mammogram images. This method can serve as a radiologist's assistant for early detection and improves the efficiency of the overall system. The craniocaudal (CC) and mediolateral oblique (MLO) views are widely used for the identification and diagnosis of breast cancer, and diagnostic accuracy improves as the number of views increases; our proposed framework therefore combines the MLO and CC views to enhance system performance. Another major challenge is the scarcity of labeled data, which is addressed by applying transfer learning and data augmentation. Three mammographic datasets, MIAS, DDSM, and CBIS-DDSM, are used in our evaluation. End-to-end fully convolutional neural networks (CNNs) are introduced in this paper. The proposed technique of combining data augmentation with the modified U-Net model and InceptionV3 achieves the best results, specifically on the DDSM dataset: 98.87% accuracy, 98.88% area under the curve (AUC), 98.98% sensitivity, 98.79% precision, a 97.99% F1 score, and a computational time of 1.2134 s.
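The abstract notes that data augmentation is used to compensate for the scarcity of labeled mammograms. The exact transform set is not specified in the abstract, so the following is only a minimal sketch, assuming simple label-preserving flips and 90-degree rotations, of how one input patch can be expanded into several training variants:

```python
import numpy as np

def augment(image):
    """Generate flipped/rotated variants of a 2D mammogram patch.

    A minimal augmentation sketch: the paper's precise transform set is
    not given in the abstract, so four 90-degree rotations and their
    horizontal mirrors (8 variants total) are assumed for illustration.
    """
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal mirror of each rotation
    return variants

# Example: one 4x4 patch yields 8 augmented variants.
patch = np.arange(16, dtype=np.float32).reshape(4, 4)
augmented = augment(patch)
print(len(augmented))  # 8
```

Each variant preserves the benign/malignant label of the source patch, so an 8-fold expansion of the training set comes at no extra annotation cost.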
