Abstract

Cervical cancer is the fourth most common cancer in women, and its precise detection plays a critical role in disease treatment and prognosis prediction. Fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) has an established role in most cancer imaging applications, offering superior sensitivity and specificity. However, a typical FDG-PET/CT analysis involves the time-consuming interpretation of hundreds of images, and this intensive screening workload places a heavy burden on clinicians. To increase the efficiency of clinical diagnosis, we propose a computer-aided, deep learning-based framework that detects cervical cancer from multimodal medical images. The framework has three components: image registration, multimodal image fusion, and lesion object detection. Unlike traditional approaches, our image fusion method combines the multimodal medical images adaptively. We analyze the performance of deep learning models on each individual modality and conduct extensive experiments comparing different image fusion methods, paired with state-of-the-art (SOTA) deep learning-based object detection models, across image modalities. Compared with PET, the single modality with the highest recognition accuracy, our proposed method improves recognition accuracy across multiple object detection models by an average of 6.06%; compared with the best results of other multimodal fusion methods, it yields an average improvement of 8.9%.
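The abstract names the pipeline stages (registration, fusion, detection) but does not specify the adaptive fusion rule. As a rough illustration only, the minimal Python sketch below fuses two co-registered PET and CT slices with a pixel-wise weighted blend; the function fuse_pet_ct, the min-max normalization, and the fixed alpha weight are hypothetical stand-ins, not the paper's method.

```python
# Hypothetical sketch of the pipeline named in the abstract:
# (1) register modalities, (2) fuse them, (3) run lesion detection.
# The weighted blend and fixed alpha below are illustrative only and
# do NOT reproduce the paper's adaptive fusion method.
import numpy as np

def fuse_pet_ct(pet: np.ndarray, ct: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Pixel-wise weighted fusion of two co-registered 2-D slices.

    pet, ct: arrays of identical shape (assumed already registered).
    alpha:   weight given to PET in [0, 1]; (1 - alpha) weights CT.
    """
    if pet.shape != ct.shape:
        raise ValueError("slices must be registered to the same shape")
    # Rescale each modality to [0, 1] so neither intensity range dominates.
    pet_n = (pet - pet.min()) / (pet.max() - pet.min() + 1e-8)
    ct_n = (ct - ct.min()) / (ct.max() - ct.min() + 1e-8)
    return alpha * pet_n + (1.0 - alpha) * ct_n

# The fused slice would then be fed to an off-the-shelf object detector.
fused = fuse_pet_ct(np.random.rand(256, 256), np.random.rand(256, 256), alpha=0.6)
print(fused.shape, float(fused.min()), float(fused.max()))
```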
