Abstract
Medical imaging plays a pivotal role in the clinical diagnosis of brain disease. Many imaging modalities can reveal the state of brain tissue, but each has both strengths and shortcomings. For example, magnetic resonance imaging (MRI) provides structural information but no functional characteristics of tissue, while positron emission tomography (PET) captures functional characteristics but no structural information. Attention mechanisms have been widely used in image fusion tasks, such as the fusion of infrared and visible images and of medical images. However, existing attention models lack a mechanism for balancing multimodal image features, which degrades the final fusion performance. This paper proposes an end-to-end multimodal brain image fusion framework, MMI-fuse. Specifically, we first apply an autoencoder to extract features from the source images. We then propose an information preservation weighted channel spatial attention model (ICS) to fuse the image features, assigning each feature an adaptive weight according to its degree of information preservation. Finally, a decoder reconstructs the fused medical image. With the help of the improved attention model and the encoder-decoder structure, the proposed method improves the quality of fused images while substantially reducing fusion time. To validate its performance, we collected 1590 pairs of multimodal brain images from the Harvard dataset and performed extensive experiments, comparing against seven methods on five metrics. The results demonstrate that the proposed method achieves notable performance in both visual quality and objective metric scores among the compared approaches, while also requiring the least time of all compared methods.
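The abstract describes a pipeline of encoder, information-weighted channel spatial attention fusion, and decoder. The sketch below illustrates only the fusion step in NumPy, under explicit assumptions: the "information preservation degree" is approximated here by the Shannon entropy of each feature map's activation histogram, and the channel and spatial attention maps are simple softmax-normalized averages. The paper's actual ICS model is learned end-to-end, so this is a minimal illustrative stand-in, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a flat array.
    e = np.exp(x - x.max())
    return e / e.sum()

def information_degree(feat, bins=64):
    # Assumption: use Shannon entropy of the activation histogram as a
    # proxy for the "information preservation degree" in the abstract.
    hist, _ = np.histogram(feat, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def ics_fuse(f1, f2):
    """Fuse two (C, H, W) feature maps from a shared encoder.

    Illustrative channel-spatial attention with an adaptive balance
    weight per modality, derived from each feature map's entropy.
    """
    w1, w2 = information_degree(f1), information_degree(f2)
    e1, e2 = np.exp(w1), np.exp(w2)
    a1, a2 = e1 / (e1 + e2), e2 / (e1 + e2)  # adaptive balance weights

    # Channel attention: softmax over per-channel global averages.
    c1 = softmax(f1.mean(axis=(1, 2)))[:, None, None]
    c2 = softmax(f2.mean(axis=(1, 2)))[:, None, None]

    # Spatial attention: softmax over the channel-averaged map.
    s1 = softmax(f1.mean(axis=0).ravel()).reshape(f1.shape[1:])
    s2 = softmax(f2.mean(axis=0).ravel()).reshape(f2.shape[1:])

    # Weighted combination of attended features; a decoder would then
    # reconstruct the fused image from this representation.
    return a1 * c1 * s1 * f1 + a2 * c2 * s2 * f2
```

A real implementation would learn the attention parameters jointly with the autoencoder; this sketch only shows how an entropy-derived weight can balance the two modalities' attended features before decoding.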