Abstract

Breast cancer diagnosis from mammographic scans is a complex challenge, demanding accurate and efficient methodologies. Existing approaches often rely on time-consuming manual feature extraction or lack scalability when applied to diverse clinical image sets. To address these limitations, this paper presents an innovative solution: MPC2EL (Multiple Phased Convolution model for Pre-emption of Breast Cancer via Extensive Learning Operations). The core problem tackled by this research is the need for breast cancer detection that is scalable, accurate, and practical. To achieve this, MPC2EL leverages extensive learning operations and multiple phased convolutions to process large clinical datasets containing both cancerous and benign images. Our approach begins by converting these images into multimodal feature sets spanning the frequency, cosine, wavelet, and Gabor domains. These feature sets enable enhanced classification of images into different cancer stages. At the core of the proposed methodology is a Dual 1D Convolutional Neural Network (D1D CNN) architecture that first categorizes images as malignant or benign and then refines the classification of malignant images. Experimental results show a 5.9% increase in classification accuracy and a 4.5% improvement in recall compared with conventional deep learning methods. The model also demonstrates lower processing delay and higher precision, making it a promising candidate for real-world clinical deployment.
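The abstract does not specify the implementation details of the feature-extraction stage or the D1D CNN. The following is a minimal Python sketch of one plausible reading of the described pipeline: four transform-domain feature sets (Fourier, DCT, wavelet, Gabor) concatenated into a 1D vector and passed through a two-stage 1D CNN. All library choices (NumPy, SciPy, PyWavelets, scikit-image, PyTorch), layer sizes, the Haar wavelet, the Gabor frequency, the class counts, and the label encoding are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a multimodal feature pipeline and a two-stage ("dual") 1D CNN,
# following the high-level description in the abstract. Hyperparameters are
# placeholders, not the values used by MPC2EL.
import numpy as np
import pywt                                    # PyWavelets
from scipy.fft import dctn                     # 2-D discrete cosine transform
from skimage.filters import gabor              # Gabor filtering
import torch
import torch.nn as nn

def multimodal_features(img: np.ndarray) -> np.ndarray:
    """Flatten one grayscale mammogram into a 1-D multimodal feature vector."""
    freq = np.abs(np.fft.fft2(img))            # frequency (Fourier) domain
    cos = dctn(img, norm="ortho")              # cosine (DCT) domain
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")  # wavelet domain (approx. band)
    gab_real, _ = gabor(img, frequency=0.6)    # Gabor domain (real response)
    # Concatenate the four modalities into a single 1-D feature set.
    return np.concatenate([freq.ravel(), cos.ravel(),
                           cA.ravel(), gab_real.ravel()]).astype(np.float32)

class OneDCNN(nn.Module):
    """A small 1-D CNN head; two such heads form the dual (D1D) classifier."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, feature_length)
        return self.body(x)

# Stage 1 separates benign from malignant; stage 2 refines malignant cases
# into cancer stages (four stages here is an assumed placeholder).
stage1 = OneDCNN(n_classes=2)
stage2 = OneDCNN(n_classes=4)

img = np.random.rand(128, 128)                 # stand-in for a mammogram patch
x = torch.from_numpy(multimodal_features(img))[None, None, :]
if stage1(x).argmax(dim=1).item() == 1:        # 1 = malignant (assumed label)
    stage_logits = stage2(x)                   # refine the malignant prediction
```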
