Abstract

Cervical cancer is one of the most widespread malignancies affecting women's health worldwide. Detection is particularly difficult, however, because of the complex background of cervical smears, in which cells are often stacked in clusters. To address this problem, we take YOLOv5 as the baseline and extend it with a simple Transformer Block, consisting only of multi-head self-attention layers and an MLP, to better extract cell features and capture global information. In addition, we employ the Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, to let the model refine features adaptively against the complex background and thereby assist detection. Finally, we compare our model with the YOLOv5 baseline. On the CDetector dataset, our model achieves 52.5% mAP@.5, which is 6% higher than the baseline; under transfer learning it reaches 62.2%, outperforming the baseline by 3.2%.

Keywords: Cervical squamous lesion cells · YOLOv5 · Transformer · Deep learning
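To make the two architectural additions concrete, the sketch below illustrates, under stated assumptions, the kind of modules the abstract names: a "simple" Transformer block built only from multi-head self-attention and an MLP, and CBAM with its channel and spatial attention stages. This is not the authors' implementation; the module names, embedding sizes, head counts, reduction ratio, and kernel size are illustrative choices.

```python
# Minimal PyTorch sketch of the two modules described in the abstract.
# All hyperparameters (num_heads, mlp_ratio, reduction, kernel_size) are assumptions.
import torch
import torch.nn as nn


class SimpleTransformerBlock(nn.Module):
    """Multi-head self-attention followed by an MLP, with residual connections."""
    def __init__(self, dim, num_heads=4, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):                        # x: (B, C, H, W) feature map
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)         # -> (B, H*W, C) token sequence
        q = self.norm1(t)
        a, _ = self.attn(q, q, q)                # global self-attention over all positions
        t = t + a                                # residual around attention
        t = t + self.mlp(self.norm2(t))          # residual around MLP
        return t.transpose(1, 2).reshape(b, c, h, w)


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```

In a YOLOv5-style detector, modules like these would typically be inserted on backbone or neck feature maps, e.g. `CBAM(256)(SimpleTransformerBlock(256)(features))` for a 256-channel map; where exactly the authors place them is not specified in the abstract.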
