Abstract

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Existing multi-modal segmentation methods mainly fall short in two respects. First, the adopted fusion strategies assume well-aligned input images and are therefore vulnerable to spatial misalignment between modalities (caused by respiratory motion, differing scanning parameters, registration errors, etc.). Second, their performance remains limited by segmentation uncertainty, which is particularly acute in tumor boundary regions. To tackle these issues, in this paper we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which jointly performs feature alignment and feature aggregation, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module that refines uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising segmentation results and outperforms state-of-the-art methods.
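
The abstract names two components but gives no implementation details, so the following PyTorch sketch is only a minimal illustration of the general techniques, not the paper's method. It assumes the deformable aggregation predicts a per-pixel offset field (here via a small convolutional head followed by grid_sample) and that uncertainty is measured by normalized prediction entropy; all class names, the threshold tau, and the inpainting convolution are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAggregation(nn.Module):
    """Hypothetical sketch: align a secondary modality's features to the
    primary modality with a predicted offset field, then fuse them."""
    def __init__(self, channels):
        super().__init__()
        # Offsets (dx, dy) are predicted from the concatenated feature pair.
        self.offset_head = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_primary, feat_secondary):
        b, _, h, w = feat_primary.shape
        # Predict a flow field that compensates inter-modality misalignment.
        offsets = self.offset_head(torch.cat([feat_primary, feat_secondary], dim=1))
        # Build an identity sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=feat_primary.device),
            torch.linspace(-1, 1, w, device=feat_primary.device),
            indexing="ij",
        )
        base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Scale pixel offsets to normalized coordinates and warp.
        flow = offsets.permute(0, 2, 3, 1)
        flow = flow / torch.tensor([w / 2.0, h / 2.0], device=flow.device)
        aligned = F.grid_sample(feat_secondary, base_grid + flow, align_corners=True)
        # Aggregate the aligned cross-modal features.
        return self.fuse(torch.cat([feat_primary, aligned], dim=1))

class UncertainRegionRefinement(nn.Module):
    """Hypothetical sketch: refine pixels whose predictive entropy is high
    using features aggregated from their neighborhood."""
    def __init__(self, channels, num_classes, tau=0.7):
        super().__init__()
        self.tau = tau  # assumed entropy threshold, not from the paper
        self.inpaint = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.classify = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, feats, logits):
        probs = F.softmax(logits, dim=1)
        # Normalized entropy in [0, 1]; high values mark uncertain pixels.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1, keepdim=True)
        entropy = entropy / torch.log(torch.tensor(float(logits.shape[1])))
        uncertain = (entropy > self.tau).float()
        # Blend in neighborhood-aggregated features only at uncertain pixels.
        refined = feats * (1 - uncertain) + self.inpaint(feats) * uncertain
        return self.classify(refined)
```

Note that grid_sample here merely stands in for deformable sampling; a deformable convolution (e.g., torchvision.ops.deform_conv2d) would be a closer analogue to a dedicated alignment-and-aggregation module, at the cost of a slightly more involved offset parameterization.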
