Abstract

Accurate segmentation of rectal tumors is crucial for determining the stage of rectal cancer and developing suitable therapies. However, complex image backgrounds, irregular tumor edges, and poor contrast hinder related research. This study presents an attention-based multi-modal fusion module that effectively integrates complementary information from different MRI images and suppresses redundancy. In addition, a deep learning-based segmentation model (AF-UNet) is designed to achieve accurate segmentation of rectal tumors. This model takes multi-parametric MRI images as input and integrates their features by embedding the attention fusion module. Finally, three types of MRI images (T2, ADC, DWI) were collected from 250 patients with rectal cancer, with the tumor regions delineated by two oncologists. The experimental results show that the proposed method outperforms state-of-the-art image segmentation methods, achieving a Dice coefficient of [Formula: see text], and is also better than other multi-modal fusion methods.

Graphical abstract: Framework of the AF-UNet. The model takes multi-modal MRI images as input, integrates complementary information using an attention mechanism, and suppresses redundancy.
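
To give a rough picture of the kind of attention-based fusion the abstract describes, the sketch below re-weights per-modality feature maps with a squeeze-and-excitation style channel gate before merging them with a 1x1 convolution. The module name AttentionFusion, the gate design, and all sizes are illustrative assumptions; the paper's actual attention fusion module may differ.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Illustrative channel-attention fusion of per-modality feature maps.

    Hypothetical sketch; not the paper's exact AF module.
    """

    def __init__(self, channels: int, num_modalities: int = 3):
        super().__init__()
        # One squeeze-and-excitation style gate per modality (assumed design).
        self.gates = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                            # global channel context
                nn.Conv2d(channels, channels // 4, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, channels, kernel_size=1),
                nn.Sigmoid(),                                       # per-channel weights in (0, 1)
            )
            for _ in range(num_modalities)
        ])
        # 1x1 convolution merges the re-weighted modality features back to `channels`.
        self.merge = nn.Conv2d(channels * num_modalities, channels, kernel_size=1)

    def forward(self, features):
        # features: list of [B, C, H, W] tensors, one per modality (e.g. T2, ADC, DWI).
        weighted = [gate(f) * f for gate, f in zip(self.gates, features)]
        return self.merge(torch.cat(weighted, dim=1))


if __name__ == "__main__":
    # Toy usage with three 64-channel feature maps standing in for T2, ADC, and DWI features.
    t2, adc, dwi = (torch.randn(1, 64, 32, 32) for _ in range(3))
    fused = AttentionFusion(channels=64)([t2, adc, dwi])
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

In a U-Net-style encoder-decoder, such a fusion block would typically sit where per-modality encoder features are combined before being passed to the shared decoder; where exactly AF-UNet embeds it is described in the full paper.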
