Abstract

In this paper, we propose the medical Wasserstein generative adversarial networks (MWGAN), an end-to-end model, for fusing magnetic resonance imaging (MRI) and positron emission tomography (PET) medical images. Our method establishes two adversarial games between a generator and two discriminators to generate a fused image with the details of soft tissue structures in organs from MRI images and the functional and metabolic information from PET images. Different information from source images can be effectively adjusted with a specifically designed loss function. In addition, we use WGAN instead of the traditional generative adversarial networks to make the training process more stable and allow our architecture to deal with source images of different resolutions. Qualitative and quantitative comparisons on publicly available datasets demonstrate the superiority of MWGAN over the state-of-the-art networks. Furthermore, our MWGAN is applied to the fusion of MRI and computed tomography images of different resolutions, achieving a satisfactory performance.
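The dual-discriminator Wasserstein setup described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual implementation: the critic scores are assumed to be plain scalar outputs per image, and the weights `lambda_mri` and `lambda_pet` are hypothetical names for the trade-off coefficients in the fusion loss.

```python
import numpy as np

def wgan_losses(d_mri_fused, d_mri_real, d_pet_fused, d_pet_real,
                lambda_mri=1.0, lambda_pet=1.0):
    """Illustrative WGAN objectives for a dual-discriminator fusion model.

    Each argument is an array of scalar critic scores for a batch.
    Each critic estimates the Wasserstein distance between its source
    distribution (MRI or PET) and the fused-image distribution; the
    generator tries to fool both critics at once.
    """
    # Critic objectives: maximize E[D(real)] - E[D(fused)],
    # written here as losses to minimize (negated).
    loss_d_mri = np.mean(d_mri_fused) - np.mean(d_mri_real)
    loss_d_pet = np.mean(d_pet_fused) - np.mean(d_pet_real)
    # Generator adversarial loss: raise both critics' scores on fused images,
    # with weights balancing MRI detail against PET functional information.
    loss_g = -(lambda_mri * np.mean(d_mri_fused)
               + lambda_pet * np.mean(d_pet_fused))
    return loss_g, loss_d_mri, loss_d_pet
```

In a full WGAN training loop the critic weights would additionally be constrained (e.g., by clipping or a gradient penalty) to enforce the Lipschitz condition that makes the Wasserstein estimate valid; that machinery is omitted here for brevity.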

Highlights

  • Medical image fusion makes full use of multi-source images to obtain complementary information, making clinical diagnosis and treatment more accurate and comprehensive

  • We verify the performance of our medical Wasserstein generative adversarial networks (MWGAN) on publicly available datasets through comparison with state-of-the-art fusion methods

  • We extract the intensity component IPET from the positron emission tomography (PET) images, which yields 83 pairs of magnetic resonance imaging (MRI) and IPET images with the same resolution of 256 × 256


Introduction

Medical image fusion makes full use of multi-source images to obtain complementary information, making clinical diagnosis and treatment more accurate and comprehensive. In the field of medical imaging, magnetic resonance imaging (MRI) images capture details of organs' soft tissue structures (e.g., texture detail information), while positron emission tomography (PET) images provide functional and metabolic information (e.g., pixel intensity information) [1]. Fusing these images ensures that the resulting image contains both soft tissue details and functional and metabolic information. The key to fusing source images from different sensors is to extract the most important information from each source image into a single image. Different schemes for image fusion have been developed, including

