Abstract
Deep learning, especially Convolutional Neural Networks (CNNs), has demonstrated strong performance in image recognition and classification tasks. CNNs enable complex pattern recognition by extracting image features through successive layers of abstraction. However, despite their excellent performance in general image classification, their limitations become apparent in specific domains such as cervical cell medical image classification: although the morphology of cervical cells differs between normal, diseased, and cancerous states, these differences are often subtle and difficult to capture. To address this problem, we propose a two-stream feature fusion model comprising a manual feature branch, a deep feature branch, and a decision fusion module. Specifically, cervical cell images are processed by a modified DarkNet backbone to extract deep features. To enhance the learning of deep features, we design scale convolution blocks that replace the original convolutions, termed basic convolution blocks. The manual feature branch comprises a range of traditional hand-crafted features fed into a multilayer perceptron. In addition, we design three decision feature channels trained on both manual and deep features to further improve classification performance. We establish a 15-category cervical cytopathology image dataset (CCID) containing 148,762 images and also conduct experiments on the SIPaKMeD dataset. Extensive experiments show that our model outperforms state-of-the-art cervical cell classification models. The results demonstrate that our approach can substantially aid pathologists in accurately evaluating cervical smears.
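To make the two-stream design concrete, the following is a minimal sketch in PyTorch of the overall structure described above: a deep branch, a manual-feature branch with an MLP, and a decision-fusion step combining three decision channels. The layer sizes, the simplified convolutional stack standing in for the modified DarkNet backbone (the paper's scale convolution blocks are not reproduced here), and the averaging fusion rule are all illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DeepBranch(nn.Module):
    """Stand-in for the modified DarkNet backbone (greatly simplified)."""
    def __init__(self, num_classes=15, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.1),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)   # deep feature vector
        return f, self.head(f)            # features and class logits

class ManualBranch(nn.Module):
    """MLP over pre-extracted hand-crafted (morphology/texture) features."""
    def __init__(self, in_dim=64, num_classes=15, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, h):
        f = self.mlp(h)
        return f, self.head(f)

class TwoStreamFusion(nn.Module):
    """Three decision channels: deep-only, manual-only, and a joint channel
    over concatenated features; their outputs are fused into one prediction."""
    def __init__(self, manual_dim=64, num_classes=15):
        super().__init__()
        self.deep = DeepBranch(num_classes)
        self.manual = ManualBranch(manual_dim, num_classes)
        self.joint_head = nn.Linear(256 + 128, num_classes)

    def forward(self, image, manual_feats):
        fd, logits_deep = self.deep(image)
        fm, logits_manual = self.manual(manual_feats)
        logits_joint = self.joint_head(torch.cat([fd, fm], dim=1))
        # simple averaging as the decision-fusion rule (an assumption)
        return (logits_deep + logits_manual + logits_joint) / 3.0

# usage
model = TwoStreamFusion()
img = torch.randn(2, 3, 224, 224)      # batch of cell images
hand = torch.randn(2, 64)              # batch of hand-crafted feature vectors
print(model(img, hand).shape)          # torch.Size([2, 15])
```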