Abstract

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Existing multi-modal segmentation methods have two main deficiencies. First, their fusion strategies assume well-aligned input images and are therefore vulnerable to spatial misalignment between modalities (caused by respiratory motion, differing scanning parameters, registration errors, etc.). Second, their performance remains limited by segmentation uncertainty, which is particularly acute in tumor boundary regions. To tackle these issues, in this paper, we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which integrates feature alignment and feature aggregation in an ensemble, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module to refine uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising tumor segmentation results and outperforms state-of-the-art methods.
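
The abstract describes two components at a high level: offset-based alignment of one modality's features to another before fusion, and identification of uncertain pixels for refinement. The following is a minimal PyTorch sketch of how such components might look, not the authors' implementation; the module names, channel sizes, offset parameterization, and entropy threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAggregation(nn.Module):
    """Sketch: align an auxiliary modality's features to the reference
    modality with a predicted per-pixel offset field, then fuse the pair."""
    def __init__(self, channels: int):
        super().__init__()
        # Predict a 2D offset (in normalized [-1, 1] coordinates) per pixel
        # from the concatenated cross-modal features.
        self.offset_head = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, ref_feat, aux_feat):
        b, _, h, w = ref_feat.shape
        offset = self.offset_head(torch.cat([ref_feat, aux_feat], dim=1))
        # Base sampling grid in [-1, 1], shifted by the predicted offsets.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=ref_feat.device),
            torch.linspace(-1, 1, w, device=ref_feat.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1).expand(b, -1, -1, -1)
        grid = base + offset.permute(0, 2, 3, 1)
        # Warp the auxiliary features toward the reference modality.
        aligned = F.grid_sample(aux_feat, grid, align_corners=True)
        return self.fuse(torch.cat([ref_feat, aligned], dim=1))

def uncertain_region_mask(logits, threshold=0.5):
    """Sketch: flag pixels whose predictive entropy exceeds a threshold;
    these are the candidates a refinement step would revisit using
    neighboring, more confident features."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    return entropy > threshold
```

As a usage note, one would typically apply the aggregation module at each encoder scale and feed the uncertainty mask to a refinement head; how the actual paper wires these together is not specified in the abstract.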
