Abstract

There is a critical need for computer-aided diagnosis that detects cervical lesions in colposcopic images. Compared with natural-scene images, colposcopic images present specific challenges, such as low contrast, high visual similarity, and blurry lesion boundaries, which make it difficult to accurately detect cervical lesion areas. To address these problems, this paper proposes a RetinaNet-based method to detect lesion areas in colposcopic images. First, deep features of the entire image are extracted by combining ResNet50 with a feature pyramid network (FPN). In addition, the model suppresses the weight of simple, easy-to-distinguish samples through the focal loss, ensuring that training focuses on the hard-to-distinguish, important samples and improving the utilization of the important features. Then, object classification and bounding box regression are performed on the feature maps by two subnetworks. Under the same experimental conditions, the detection performance of this method is compared with that of other mainstream models using mean average precision (mAP), average recall (AR), and other metrics. Experimental results show that the RetinaNet-based method outperforms the compared models, with a mAP[.5:.95] of 32.72%, a mAP.5 of 50.16%, and an AR of 49.70%. Compared with Faster R-CNN-ResNet50 + FPN, the mAP[.5:.95] is 2.76% higher and the AR is 6.42% higher.
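The down-weighting of easy samples described above is the mechanism of the focal loss used by RetinaNet. A minimal sketch of the binary form, assuming the default hyperparameters alpha = 0.25 and gamma = 2 from the RetinaNet paper (the function name and scalar formulation here are illustrative, not the authors' implementation):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class (0 < p < 1)
    y: ground-truth label, 1 (lesion) or 0 (background)
    alpha, gamma: defaults from the RetinaNet paper (illustrative here)
    """
    p_t = p if y == 1 else 1.0 - p               # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    # The (1 - p_t)^gamma factor shrinks the loss of well-classified
    # (easy) samples, so training gradients concentrate on hard samples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For example, a confidently correct prediction (p = 0.9, y = 1) contributes orders of magnitude less loss than a confidently wrong one (p = 0.1, y = 1), which is how the model keeps hard, ambiguous lesion regions dominant during training.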
