Abstract

Background: Because medical images contain sensitive patient information, publicly accessible datasets collected with patient consent are difficult to obtain, and few large-scale datasets suitable for training image-fusion models exist. To address this issue, we propose a medical image-fusion model based on knowledge distillation (KD) and an explainable-AI-module-based generative adversarial network with dual discriminators (KDE-GAN).

Method: KD reduces the amount of training data required by distilling a complex image-fusion model into a simple model with the same feature-extraction capability. The images generated by the explainable AI module reveal whether the discriminator can distinguish real images from fake images. When the discriminator judges images correctly based on the key features, training can be stopped early, reducing overfitting and the amount of data required for training.

Results: Trained on only small-scale datasets, KDE-GAN can generate clear fused images. Its fusion results were evaluated quantitatively using five metrics: spatial frequency, structural similarity, edge information transfer factor, normalized mutual information, and nonlinear correlation information entropy.

Conclusion: Experimental results show that the fused images generated by KDE-GAN are superior to those of state-of-the-art methods, both subjectively and objectively.
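The abstract does not give the paper's exact distillation objective, but the standard soft-target KD loss it alludes to (a small student network trained to match a larger teacher's output distribution at a softened temperature) can be sketched as follows. All function and variable names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target KD loss (Hinton-style): KL(teacher || student)
    at temperature T, scaled by T^2 so gradients keep their magnitude.
    This is a generic sketch, not the KDE-GAN paper's exact loss."""
    p = softmax(teacher_logits, T)  # teacher's softened targets
    q = softmax(student_logits, T)  # student's softened predictions
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty, which is what drives the small model toward the large model's feature behavior.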
