Abstract

In this study, mammography images from the Mammographic Image Analysis Society (MIAS) and INbreast datasets are classified as normal, benign, or malignant. After preprocessing, each image is given as input to two different end-to-end deep networks. The first network contains only a Convolutional Neural Network (CNN), while the second is a hybrid structure combining the CNN with a Bidirectional Long Short-Term Memory (BiLSTM) network. On the MIAS dataset, the first (CNN-only) and second (hybrid) architectures achieve classification accuracies of 97.60% and 98.56%, respectively. In addition, experiments performed on the INbreast dataset at the end of the study demonstrate the effectiveness of the proposed method. These results are comparable to those reported in prominent previous studies. The proposed study contributes to prior work in terms of its preprocessing steps, deep network design, and high diagnostic accuracy.
