Abstract

In the automated screening of cervical cancer from raw microscopic images, segmenting individual cells is time-consuming and error-prone, which makes the automatic extraction of individual cells at inference time impractical. Automated classification models based on hand-crafted features such as texture and morphology are also not consistently accurate. In this article, a transfer learning based deep EfficientNet model is used to screen for cervical cancer without any segmentation, reducing manual errors and inference time. The EfficientNet is first trained on the ImageNet dataset and then fine-tuned on a dataset of cervical cell microscopic images spanning different categories of cervix cells. The method is evaluated on the Herlev Pap smear dataset. Instead of operating on individual cells as in previous methods, it operates on images containing multiple cells, so the number of cells inferred per unit time increases drastically. Results show that the EfficientNet model classifies well when applied to the Herlev benchmark Pap smear dataset and evaluated using ten-fold cross-validation, and it is promising in terms of inference time compared with other methods. The performance comparison with other models shows that EfficientNet improves accuracy and other scores while reducing processing time.
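To illustrate the transfer-learning setup described above, the following is a minimal sketch assuming a TensorFlow/Keras EfficientNet-B0 backbone pre-trained on ImageNet with a new classification head fine-tuned on cervical-cell images. The framework, EfficientNet variant, input resolution, data paths, and training hyper-parameters are assumptions for illustration only; the abstract does not specify them.

```python
# Minimal transfer-learning sketch (assumptions: TensorFlow/Keras,
# EfficientNet-B0 backbone, 224x224 inputs; none of these are stated
# in the paper's abstract).
import tensorflow as tf

NUM_CLASSES = 7          # the Herlev Pap smear dataset is organised into 7 cell categories
IMG_SIZE = (224, 224)    # assumed input resolution

# 1) Load EfficientNet pre-trained on ImageNet, without its classifier head.
#    Keras EfficientNet expects raw pixel values in [0, 255]; it rescales internally.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False   # freeze the backbone for the initial fine-tuning stage

# 2) Attach a new classification head for the cervical-cell categories.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# 3) Fine-tune on Pap smear images loaded from class-labelled folders
#    (hypothetical path; replace with the actual image location).
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "herlev/train", image_size=IMG_SIZE, label_mode="categorical")
# model.fit(train_ds, epochs=10)
```

After this frozen-backbone stage, the backbone can optionally be unfrozen and trained further at a lower learning rate, a common second step in transfer learning.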
