Abstract

Cervical cancer is a common type of tumor that arises in the cervix. Cervical cytology images contain millions of cells with varied orientations and overlaps, and segmenting and annotating the cytoplasm and nuclei from unsegmented cell images for classification is a laborious process. In this paper, we propose an automated computerized system to classify unsegmented cervical cell images using convolutional neural networks (CNNs) and vision transformer (ViT) models. A CNN automatically learns the spatial hierarchy of features, which improves medical image classification, while a ViT captures long-range dependencies in large-scale image recognition tasks through its encoder and global self-attention mechanism. We propose a novel cervix feature fusion (CFF) method that fuses the features of a pre-trained DenseNet201 and a vision transformer with shifted patch tokenization (SPT) and locality self-attention (LSA). This fusion captures both local and global features from the cervical cell images. A fuzzy feature selection (FFS) method then selects discriminative features from the fused feature vector for better classification of cell abnormalities. The proposed method is evaluated on unsegmented cervical cell images from the publicly available SIPaKMeD dataset and achieves 96.13% accuracy, exceeding state-of-the-art methods despite the smaller amount of unsegmented cervical cell image data.
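
The sketch below illustrates the fusion-then-selection pipeline described above, assuming global-average-pooled DenseNet201 features, ViT features computed elsewhere by the SPT/LSA model (not shown), and a fuzzy-entropy ranking as an illustrative stand-in for the FFS criterion. The function names and the selection rule are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: CNN + ViT feature fusion followed by a toy fuzzy feature selection.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import DenseNet201

def densenet_features(images: np.ndarray) -> np.ndarray:
    """Global-average-pooled DenseNet201 features (local/spatial cues)."""
    backbone = DenseNet201(include_top=False, weights="imagenet", pooling="avg")
    inputs = tf.keras.applications.densenet.preprocess_input(images)
    return backbone.predict(inputs)

def fuse_features(cnn_feats: np.ndarray, vit_feats: np.ndarray) -> np.ndarray:
    """Concatenate CNN (local) and ViT (global) descriptors per image."""
    return np.concatenate([cnn_feats, vit_feats], axis=1)

def fuzzy_feature_selection(feats: np.ndarray, keep: int) -> np.ndarray:
    """Illustrative FFS stand-in: rank feature columns by fuzzy entropy and
    keep the `keep` crispest (most discriminative) ones."""
    mins, maxs = feats.min(axis=0), feats.max(axis=0)
    mu = (feats - mins) / (maxs - mins + 1e-8)          # memberships in [0, 1]
    entropy = -(mu * np.log(mu + 1e-8) +
                (1.0 - mu) * np.log(1.0 - mu + 1e-8)).mean(axis=0)
    selected = np.argsort(entropy)[:keep]               # lower entropy = crisper
    return feats[:, selected]

# Example usage with random stand-in data (224x224 RGB images, precomputed ViT features):
# images = np.random.rand(8, 224, 224, 3) * 255.0
# vit_feats = np.random.rand(8, 256)                    # from the SPT/LSA ViT (not shown)
# fused = fuse_features(densenet_features(images), vit_feats)
# reduced = fuzzy_feature_selection(fused, keep=512)
```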
