In this work, a convolutional neural network model was developed to read mammography scans and predict malignancy. Mammography images were first sourced locally from the University of Abuja Teaching Hospital, Gwagwalada, Nigeria, and then combined with the publicly available Mammographic Image Analysis Society (MIAS) database. The data were preprocessed using Contrast Limited Adaptive Histogram Equalization (CLAHE), image denoising, padding, and formatting. Transfer learning was implemented by re-architecting pre-trained VGG-16, VGG-19, MobileNet-V2, and DenseNet-121 models to end in an output layer with two neurons and a softmax activation function, each neuron signifying the degree to which calcifications in a mammography image are benign or malignant. Each model was then trained on the preprocessed data and evaluated, with the following results: the VGG-16 and VGG-19 based models achieved an accuracy of 86% each, precision of 87% and 88% respectively, and recall of 92% and 93% respectively. MobileNet-V2 achieved 90% accuracy with 93% precision and recall, while the DenseNet-121 model achieved 95% accuracy, 93% precision, and a 100% recall score. MobileNet-V2 performed best in the computational complexity analysis at 336.34 MFLOPs, followed by DenseNet-121 at 5.69 GFLOPs; VGG-16 and VGG-19 have computational complexities of 15.3 GFLOPs and 19.6 GFLOPs respectively.
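The CLAHE preprocessing step can be illustrated with a simplified sketch. The version below (pure NumPy, not the paper's implementation) clips each tile's histogram and equalizes per tile; full CLAHE implementations such as OpenCV's `createCLAHE` additionally interpolate bilinearly between neighbouring tile mappings, which this sketch omits. The tile count, clip fraction, and bin count are illustrative defaults, not values from the paper.

```python
import numpy as np

def clahe(img, tiles=4, clip_frac=0.02, nbins=256):
    """Simplified CLAHE: per-tile histogram clipping + equalization.

    Unlike full CLAHE, this sketch skips the bilinear interpolation
    between neighbouring tile mappings, so faint tile borders may remain.
    """
    img = np.asarray(img, dtype=np.uint8)
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(tiles):
        for j in range(tiles):
            r0, r1 = i * h // tiles, (i + 1) * h // tiles
            c0, c1 = j * w // tiles, (j + 1) * w // tiles
            tile = img[r0:r1, c0:c1]
            hist, _ = np.histogram(tile, bins=nbins, range=(0, nbins))
            # Contrast limiting: clip each bin and spread the excess evenly,
            # which caps the slope of the equalization mapping.
            limit = max(1, int(clip_frac * tile.size))
            excess = np.maximum(hist - limit, 0).sum()
            hist = np.minimum(hist, limit) + excess // nbins
            # Equalize: map intensities through the normalized clipped CDF.
            cdf = hist.cumsum().astype(float)
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
            out[r0:r1, c0:c1] = cdf[tile].astype(np.uint8)
    return out
```

Clipping the histogram before equalizing is what keeps CLAHE from over-amplifying noise in near-uniform regions of a mammogram, which plain histogram equalization would do.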
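The re-architected output layer described above maps each image to two class probabilities. A framework-agnostic sketch of such a two-neuron softmax head, with placeholder pooled backbone features and randomly initialized weights (the actual backbones, feature dimensions, and trained weights are the paper's, not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=1024)        # placeholder pooled backbone features
W = rng.normal(size=(2, 1024)) * 0.01   # weights of the 2-neuron output layer
b = np.zeros(2)                         # biases

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

# probs[0] and probs[1] signify the degree to which the calcifications
# are benign or malignant; the two values always sum to 1.
probs = softmax(W @ features + b)
```

In the transfer-learning setup, only this head is new; the convolutional layers retain their pre-trained weights as a feature extractor before fine-tuning.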