Abstract

Multi-modal medical image fusion is a long-standing and important research topic: it produces informative medical images that help doctors diagnose and treat diseases more efficiently. However, most fusion methods extract and fuse features under subjectively defined constraints, which easily distorts the unique information of the source images. In this work, we present a novel end-to-end unsupervised network for fusing multi-modal medical images. It consists of a generator and two symmetrical discriminators. The former aims to generate a "real-like" fused image under a specifically designed content and structure loss, while the latter are devoted to distinguishing the fused image from the source images. They are trained alternately until the discriminators can no longer tell the fused image apart from the source images. In addition, the symmetrical discriminator scheme helps maintain feature consistency across modalities. More importantly, to better retain texture details, U-Net is adopted as the generator, with its up-sampling layers replaced by bilinear interpolation to avoid checkerboard artifacts. For optimization, we define a content loss that preserves the gradient information and pixel activity of the source images. Both visual analysis and quantitative evaluation of the experimental results show the superiority of our method compared with cutting-edge baselines.
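The two most concrete design choices stated above, a content loss combining gradient information with pixel activity and a U-Net generator whose up-sampling uses bilinear interpolation, can be sketched roughly as follows. This is a minimal PyTorch sketch under assumptions, not the authors' implementation: the Sobel operator, the element-wise-maximum targets, the weight lambda_grad, and the up_block channel sizes are illustrative choices not specified in the abstract.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sobel_gradient(img):
        # Approximate image gradients with fixed Sobel kernels (assumed operator);
        # img is a single-channel batch of shape (N, 1, H, W).
        kx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]]).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(img, kx.to(img), padding=1)
        gy = F.conv2d(img, ky.to(img), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    def content_loss(fused, src_a, src_b, lambda_grad=5.0):
        # Pixel-activity term: keep fused intensities close to the stronger
        # response of the two source modalities (assumed target).
        loss_pixel = F.l1_loss(fused, torch.maximum(src_a, src_b))
        # Gradient term: keep fused texture close to the sharper source gradients.
        grad_target = torch.maximum(sobel_gradient(src_a), sobel_gradient(src_b))
        loss_grad = F.l1_loss(sobel_gradient(fused), grad_target)
        return loss_pixel + lambda_grad * loss_grad

    def up_block(in_ch, out_ch):
        # Decoder stage of the U-Net generator: bilinear interpolation followed by
        # a convolution instead of a transposed convolution, which is the stated
        # remedy for checkerboard artifacts (channel sizes are placeholders).
        return nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

In such a setup, content_loss would be added to the adversarial losses from the two discriminators when updating the generator; the relative weighting between the terms is likewise an assumption here.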
