Abstract

Breast cancer mortality can be reduced only through early, accurate mammography screening and diagnosis. Although CNN-based computer-aided diagnosis (CAD) systems for breast cancer have made tremendous progress recently, accurate identification of mammographic lesions remains difficult because of poor signal-to-noise ratio (SNR) and confounding physiological features. In this manuscript, an Adaptive Fuzzy C-Means Segmentation and Deep Learning Model for Efficient Mammogram Classification Using VGG-Net (AFCM-DCNN) is proposed. The input image is first passed to a Grey Code Approximation Pre-processing (GCAP) algorithm, which enhances image quality by adjusting pixel contrast. The preprocessed image is then given to an Adaptive Fuzzy C-Means (AFCM) algorithm, which segments the dominant regions of the input image. In the conventional FCM technique, the centroid values are generated randomly, which increases computational time; to improve on this, the AFCM centroids are chosen optimally by means of an optimization algorithm. A deep convolutional neural network (DCNN) then analyses the segmented image and categorizes it as benign, malignant, or normal: the method extracts image features and trains a VGG-16 Net classifier on them. The neurons at the output layer compute a Class Centric Disease Support (CCDS) score for each class, and the class with the highest support identifies the mammogram, thereby detecting the breast lesion. The proposed AFCM-DCNN method exhibits accuracy improvements of 29.3%, 25.6%, and 24.6%, and sensitivity improvements of 15.4% and 16.6%, compared with the existing methods. In future work, we hope to further enhance performance through transfer learning with similar data.
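The segmentation step above builds on standard Fuzzy C-Means. The sketch below is a minimal NumPy implementation of classic FCM (Bezdek's alternating centroid/membership updates), not the paper's adaptive variant; the optimization-based centroid selection the abstract mentions is not specified, so the function name `fuzzy_c_means` and the random initialization here are illustrative assumptions only.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Classic Fuzzy C-Means: alternates centroid and membership updates.

    X: (n_samples, n_features) array, e.g. pixel feature vectors of an image.
    m: fuzzifier (> 1); larger m yields softer memberships.
    Returns (centers, U) where U[i, k] is sample i's membership in cluster k.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random membership initialization (rows sum to 1) -- the step the
    # paper's AFCM replaces with an optimizer-chosen centroid, per the abstract.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Centroids: fuzzy-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every centroid.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # guard against division by zero
        # Membership update: u_ik is proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

Segmenting a mammogram with this routine would amount to clustering its pixel feature vectors into a small number of regions and taking the arg-max membership per pixel as the region label.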
