Abstract

Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability each modality contributes to the segmentation. Moreover, existing decision-making approaches, such as the softmax function, lack interpretability for multi-modality fusion. In this study, we propose a novel approach named contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making in multi-modality medical image segmentation. Specifically, CDE-Net first models the semantic evidence via uncertainty measurement using the proposed evidential decision-making module. Then, it leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed to optimize evidence modeling and reliability learning. Moreover, this study elaborates on the framework's interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. CDE-Net achieves high performance, with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, demonstrating its great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.
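The discounted evidential fusion described above can be sketched in evidence-theoretic terms. The following is a minimal illustration, not the paper's implementation: it assumes a two-class frame {tumor, background}, represents each modality's semantic evidence as a mass function `[m(tumor), m(background), m(Omega)]`, applies classical Shafer discounting with a reliability coefficient (in CDE-Net these coefficients are learned and contextual, not fixed scalars as here), and combines the discounted masses with Dempster's rule.

```python
import numpy as np

# Illustrative sketch only: masses over the frame {tumor, background},
# stored as [m(tumor), m(background), m(Omega)], where Omega (the whole
# frame) carries the uncommitted, i.e. uncertain, mass.

def discount(m, beta):
    """Shafer discounting: a reliability coefficient beta in [0, 1]
    scales the committed mass and transfers the remainder to ignorance."""
    m_t, m_b, _ = m
    return np.array([beta * m_t, beta * m_b, 1.0 - beta * (m_t + m_b)])

def dempster_fuse(m1, m2):
    """Dempster's rule on the two-class frame: multiply masses of
    compatible focal elements and renormalize by 1 - conflict."""
    conflict = m1[0] * m2[1] + m1[1] * m2[0]
    t = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]
    b = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]
    o = m1[2] * m2[2]
    return np.array([t, b, o]) / (1.0 - conflict)

# Hypothetical evidence from two modalities that agree on "tumor".
m_flair = np.array([0.8, 0.1, 0.1])
m_t1 = np.array([0.7, 0.2, 0.1])

fused_full = dempster_fuse(m_flair, m_t1)                  # both fully trusted
fused_disc = dempster_fuse(m_flair, discount(m_t1, 0.2))   # T1 heavily discounted
```

Note the limiting behavior: with `beta = 0` a modality's evidence becomes vacuous (`[0, 0, 1]`), so fusion returns the other modality's mass unchanged; this is what makes the learned coefficients interpretable as per-modality reliability.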
