Abstract

Although ultrasound has become an important screening tool for the non-invasive diagnosis of breast cancer, it is limited by intra- and inter-observer variability and subjectivity in diagnosis. Deep learning-based approaches, on the other hand, have the potential to deliver objective, automated diagnosis in a manner that is efficient and reproducible. In this study, we propose a deep learning methodology for classifying benign and malignant breast lesions based on combined ultrasound B-mode and Nakagami images. We hypothesize that combining the two image types, which contain complementary information, provides better classification performance in a deep learning framework than using either image type alone. The study included 230 patients with 152 benign and 78 malignant masses. Nakagami images were formed by applying a sliding window to the envelope data of each patient. A superposition approach was adopted to form fused images, in which Nakagami and B-mode images were superimposed at differing weights. A modified VGG-16 network was trained on the resulting images, and performance was evaluated on a separate test dataset containing 50 images. Models trained on fused images outperformed models trained on individual B-mode and Nakagami images. Furthermore, the AUCs obtained by models trained on fused images were statistically significantly higher than those of models trained on individual images. These results demonstrate the feasibility of combining information from Nakagami and B-mode images and its potential to improve breast cancer diagnosis.
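To make the image-formation and fusion steps concrete, the sketch below outlines a moment-based sliding-window Nakagami shape-parameter map and a weighted superposition of normalized B-mode and Nakagami images. This is a minimal illustration, not the authors' implementation: the window size, the moment-based estimator, the normalization, and the weighting parameter alpha are assumptions not specified in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nakagami_parameter_map(envelope, window=15):
    """Moment-based estimate of the Nakagami shape parameter (m) over a
    sliding window of the envelope data. Window size is an assumption."""
    r2 = envelope.astype(np.float64) ** 2
    mean_r2 = uniform_filter(r2, size=window)          # local E[R^2]
    mean_r4 = uniform_filter(r2 ** 2, size=window)     # local E[R^4]
    var_r2 = np.maximum(mean_r4 - mean_r2 ** 2, 1e-12) # local Var[R^2]
    return mean_r2 ** 2 / var_r2                       # m = E[R^2]^2 / Var[R^2]

def fuse_images(bmode, nakagami_map, alpha=0.5):
    """Weighted superposition of min-max normalized B-mode and Nakagami
    images; alpha controls the relative weight of the B-mode image."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)
    return alpha * norm(bmode) + (1.0 - alpha) * norm(nakagami_map)
```

A fused image produced this way could then be fed to a CNN classifier (here, a modified VGG-16) in place of, or alongside, the individual B-mode and Nakagami inputs.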
