Abstract

Multi-modal medical image fusion (MMIF) is a valuable technique for integrating functional metabolic information and tissue structural details from different imaging modalities, facilitating clinical diagnosis and surgical navigation. In this paper, we propose a collaborative feature representation network based on information exchange, called IE-CFRN, which distributes the contribution of each encoder feature at the channel level to achieve more accurate fusion. Furthermore, because shallow and deep features emphasize different information, we construct a hierarchical feature enhancement network (HFEN) to integrate the significant information carried by multi-level features. Additionally, we introduce a learnable weight estimation network (LWEN) to adaptively estimate the contribution coefficient in the fidelity loss that guides the training process. Extensive comparisons with eleven state-of-the-art algorithms show that IE-CFRN yields superior results in terms of visual quality and in both qualitative and quantitative evaluation.
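The abstract does not include code, but the channel-level contribution idea can be illustrated with a short PyTorch sketch: a gate predicts a per-channel weight for two encoder branches and fuses them as a convex combination. All names (`ChannelExchange`, `gate`, `reduction`) and the squeeze-and-excitation-style layout are our assumptions for illustration, not the published IE-CFRN architecture.

```python
import torch
import torch.nn as nn


class ChannelExchange(nn.Module):
    """Hypothetical channel-level information-exchange block: predicts a
    per-channel weight w for each pair of encoder features and fuses them
    as w * feat_a + (1 - w) * feat_b. Layout is illustrative only."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze-and-excitation-style gate over the concatenated branches.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # global channel statistics
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                     # weight in (0, 1) per channel
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # w decides, channel by channel, how much each modality contributes.
        w = self.gate(torch.cat([feat_a, feat_b], dim=1))
        return w * feat_a + (1.0 - w) * feat_b


# Example: fuse 64-channel encoder features from two modalities.
fuse = ChannelExchange(channels=64)
a = torch.randn(1, 64, 32, 32)
b = torch.randn(1, 64, 32, 32)
fused = fuse(a, b)
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

A convex combination keeps the fused feature on the same scale as its inputs, which is one simple way to "distribute the contribution" of each encoder at the channel level; the paper's actual exchange mechanism may differ.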
