Abstract
Limited training data and computational efficiency are persistent hindrances in multi-modal medical image fusion (MMIF) research. To address these challenges, we propose a meta-learning-inspired mutual contrastive learning framework, which divides the medical image fusion task into subtasks and pre-trains an optimal meta-representation suitable for all of them. We then fine-tune our proposed network using this meta-representation as initialization, obtaining the best model with only a few-shot dataset. Additionally, because multi-modal images contain both invariant and unique features, extracting source-image features in pairs can introduce redundant information. We therefore introduce novel mutual contrastive coupled pairs to extract both the invariant and the unique features of the source images. Experimental results demonstrate that our method outperforms other state-of-the-art fusion methods.
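To make the two ideas summarized above more concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes an InfoNCE-style contrastive loss over coupled modality pairs and a fine-tuning loop started from a pre-trained meta-representation. All names (ModalityEncoder, mutual_contrastive_loss, meta_weights.pt) and architectural details are hypothetical.

```python
# Illustrative sketch only; names and loss formulation are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Toy encoder producing a shared (invariant) and a unique feature vector per image."""

    def __init__(self, in_ch: int = 1, dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.shared_head = nn.Linear(64, dim)   # modality-invariant features
        self.unique_head = nn.Linear(64, dim)   # modality-unique features

    def forward(self, x):
        h = self.backbone(x)
        return self.shared_head(h), self.unique_head(h)


def info_nce(a, b, temperature: float = 0.1):
    """Standard InfoNCE: matching (a_i, b_i) pairs are positives, all others negatives."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def mutual_contrastive_loss(enc_a, enc_b, x_a, x_b):
    """Pull shared features of a coupled pair together; penalize overlap of unique features."""
    s_a, u_a = enc_a(x_a)
    s_b, u_b = enc_b(x_b)
    pull_shared = info_nce(s_a, s_b)                     # align invariant content
    push_unique = F.cosine_similarity(u_a, u_b).mean()   # discourage redundancy between unique parts
    return pull_shared + push_unique


if __name__ == "__main__":
    enc_mri, enc_pet = ModalityEncoder(), ModalityEncoder()
    # Hypothetical meta-representation used as initialization before fine-tuning:
    # enc_mri.load_state_dict(torch.load("meta_weights.pt"))
    opt = torch.optim.Adam(list(enc_mri.parameters()) + list(enc_pet.parameters()), lr=1e-4)
    x_mri, x_pet = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)  # dummy coupled pairs
    for _ in range(3):  # few-shot fine-tuning steps on a small paired dataset
        loss = mutual_contrastive_loss(enc_mri, enc_pet, x_mri, x_pet)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"fine-tune loss: {loss.item():.4f}")
```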