Abstract

Breast cancer is among the leading causes of cancer-related deaths in women. Radiologists commonly use mammogram images to detect breast tumors at an early stage. However, mammography can produce low-contrast images, making abnormal regions difficult and time-consuming to segment. Deep convolutional neural networks (CNNs) are widely used for image analysis. This study used deep CNN models to develop a computer-aided diagnosis (CAD) system for feature extraction and classification. The proposed approach consists of three phases. In the first phase, a shallow deep-CNN model comprising five convolutional layers, five max-pooling layers, one batch-normalization layer, and one dropout layer was developed and used to generate recombined images and extract novel features. In the second phase, the Inception-v3 model was used for label smoothing and classification, owing to its multiple filters of different sizes. In the third phase, features were extracted using the shallow deep-CNN and Inception-v3 models, and an infallible Euclidean-distance-based nonlinear dimensionality reduction approach was applied to reduce dimensionality. Finally, a Gini-index-based C4.5 decision tree was used for binary classification of mammogram images from the Digital Database for Screening Mammography combined with the Curated Breast Imaging Subset of DDSM (DDSM + CBIS-DDSM) and from the Mammographic Image Analysis Society (MIAS) dataset. The proposed hybrid shallow deep-CNN and Inception-v3 model achieved 99.52% accuracy and a 96% AUC on the DDSM + CBIS-DDSM dataset, and 97.53% accuracy and a 97% AUC on the MIAS dataset. Compared with other state-of-the-art CAD systems, the proposed hybrid approach achieved higher accuracy by combining deep features across both datasets.
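The shallow deep-CNN described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the abstract specifies only the layer counts (five convolutional layers, five max-pooling layers, one batch-normalization layer, one dropout layer), so the channel widths, kernel sizes, dropout rate, and layer ordering below are assumptions.

```python
import torch
import torch.nn as nn

def build_shallow_cnn(num_classes: int = 2) -> nn.Sequential:
    """Sketch of a shallow deep-CNN with 5 conv + 5 max-pool layers,
    one batch-norm layer, and one dropout layer, per the abstract.
    Widths/kernels/dropout rate are illustrative assumptions."""
    layers = []
    in_ch = 1  # mammograms are grayscale
    for out_ch in (16, 32, 64, 128, 256):  # assumed channel widths
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        ]
        in_ch = out_ch
    layers += [
        nn.BatchNorm2d(in_ch),          # the single batch-norm layer
        nn.Dropout(0.5),                # the single dropout layer (rate assumed)
        nn.AdaptiveAvgPool2d(1),        # collapse spatial dims for the classifier head
        nn.Flatten(),
        nn.Linear(in_ch, num_classes),  # binary benign/malignant output
    ]
    return nn.Sequential(*layers)

model = build_shallow_cnn()
logits = model(torch.zeros(1, 1, 224, 224))  # one dummy 224x224 mammogram
```

In the full pipeline, the penultimate pooled activations of this network would be concatenated with Inception-v3 features before dimensionality reduction and the C4.5 decision-tree classifier.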

