Abstract

It is of great significance to make full use of the complementary advantages of different imaging modalities to improve the accuracy of tumor segmentation and to formulate precise radiotherapy plans. This paper proposes a multi-task parallel training method that combines task-specific attention mechanisms to mine the effective information of different modalities. The method consists of three parallel learning networks based on parameter sharing: a CT segmentation network, an MRI segmentation network, and a joint learning network that measures the similarity between CT and MRI images. The CT and MRI segmentation networks learn their task-specific features and, while learning shared features, use task-specific attention modules to enhance the utilization of effective features. The similarity-measurement network jointly learns the similarity between CT and MRI images and combines the task-specific features shared by the CT and MRI segmentation networks to segment multi-modal tumor images. Comparing single-modal and multi-modal tumor segmentation results shows that multi-modal segmentation provides richer features and locates tumors more effectively, especially in fuzzy adhesion regions along the tumor boundary. In addition, comparisons with other multi-modal image segmentation methods show that the multi-task learning method is well suited to multi-modal image segmentation and achieves better segmentation results.

Keywords: Multi-modal, Tumor segmentation, Multi-task learning, Specific task attention, Similarity measurement
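To make the three-branch, parameter-sharing layout concrete, here is a minimal PyTorch sketch of the idea described in the abstract: a shared encoder applied to both modalities, per-modality task-specific attention gates and segmentation heads, and a joint embedding branch for the CT–MRI similarity measurement. All module names, layer sizes, and the choice of a channel-attention gate and cosine similarity are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch only: layer sizes, attention design, and similarity measure are
# assumptions made for illustration; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAttention(nn.Module):
    """Task-specific channel attention over shared features (SE-style gate)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, shared_feat):
        # Re-weight shared features so each task keeps what is useful to it.
        return shared_feat * self.gate(shared_feat)

class MultiTaskSegNet(nn.Module):
    def __init__(self, channels=32, num_classes=2):
        super().__init__()
        # Shared encoder: the same parameters process both CT and MRI inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific attention modules, one per modality.
        self.ct_attn = TaskAttention(channels)
        self.mri_attn = TaskAttention(channels)
        # Task-specific segmentation heads.
        self.ct_head = nn.Conv2d(channels, num_classes, 1)
        self.mri_head = nn.Conv2d(channels, num_classes, 1)
        # Joint branch: embeds each modality's shared features so a
        # similarity loss can pull paired CT/MRI representations together.
        self.embed = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(channels, 64))

    def forward(self, ct, mri):
        ct_shared = self.encoder(ct)    # shared weights, CT input
        mri_shared = self.encoder(mri)  # shared weights, MRI input
        ct_seg = self.ct_head(self.ct_attn(ct_shared))
        mri_seg = self.mri_head(self.mri_attn(mri_shared))
        sim = F.cosine_similarity(self.embed(ct_shared),
                                  self.embed(mri_shared), dim=1)
        return ct_seg, mri_seg, sim

if __name__ == "__main__":
    net = MultiTaskSegNet()
    ct, mri = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
    ct_seg, mri_seg, sim = net(ct, mri)
    print(ct_seg.shape, mri_seg.shape, sim.shape)
```

Under this reading, training would combine two segmentation losses (one per modality) with a similarity term on `sim`, so the shared encoder is pushed toward features that serve both tasks while the attention gates keep the branches specialized.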
