Abstract

Precise cervical cancer treatment relies heavily on accurate segmentation of cervical tumors from magnetic resonance (MR) images. However, this task is challenged by the inhomogeneous intensity distributions in MR images and the large variations in tumor shapes and locations. The large slice thickness further results in weak inter-slice correlations and little exploitable 3D contextual information, which greatly degrades segmentation performance. To tackle these challenges and make full use of the 3D contextual information, a multi-view feature attention-based segmentation network (MVFA-Net) is proposed in this study. To address the weak correlations among adjacent MR slices, features from different views of the volumetric MR images are extracted and processed individually and then fused by a channel-wise attention model. A cervical MR data set collected from 160 cervical cancer patients was employed to evaluate the performance of the proposed MVFA-Net. In comparison experiments, the proposed MVFA-Net outperforms eight other medical image segmentation networks by 2.6%-11.1% in Dice similarity coefficient (DSC) and by 0.39 mm-0.97 mm in average surface distance (ASD). Extensive ablation studies demonstrate the effectiveness of the proposed multi-view attention block and MVFA-Net. Additionally, with the trained segmentation network, segmenting one unseen patient takes no more than 6 s, which is highly efficient for clinical practice. The presented segmentation network could be integrated into the cervical cancer treatment routine to improve segmentation accuracy, consistency, and efficiency. The code is publicly available at: https://github.com/xyndameinv/MVFA-Net.
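The channel-wise attention fusion described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a squeeze-and-excitation style gate (global average pooling, a bottleneck of two fully connected layers, and a sigmoid) applied to per-view feature maps stacked along the channel axis; the function and weight names are hypothetical.

```python
import numpy as np

def channel_attention_fuse(view_feats, w1, w2):
    """Fuse per-view feature maps with squeeze-and-excitation style
    channel attention (hypothetical sketch, not the paper's exact block).

    view_feats: list of (C, H, W) arrays, one per view.
    w1, w2: bottleneck FC weights of shape (3C//r, 3C) and (3C, 3C//r).
    """
    x = np.concatenate(view_feats, axis=0)        # stack views -> (3C, H, W)
    s = x.mean(axis=(1, 2))                       # squeeze: global avg pool -> (3C,)
    z = np.maximum(w1 @ s, 0.0)                   # excitation: FC + ReLU -> (3C//r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))           # FC + sigmoid -> channel weights in (0, 1)
    return x * a[:, None, None]                   # re-weight stacked channels

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 4
# e.g. features from axial, sagittal, and coronal views of the MR volume
views = [rng.standard_normal((C, H, W)) for _ in range(3)]
w1 = rng.standard_normal((3 * C // r, 3 * C)) * 0.1
w2 = rng.standard_normal((3 * C, 3 * C // r)) * 0.1
fused = channel_attention_fuse(views, w1, w2)
print(fused.shape)  # (24, 16, 16)
```

The key design point is that the sigmoid gate assigns an independent weight to every channel of every view, letting the network suppress views whose slices are poorly correlated at a given location.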
