Abstract

Merging multi-sensor, multi-modal medical images into a single composite image helps identify features relevant to medical diagnosis and treatment. Although current image fusion techniques, both conventional and deep-learning-based, can produce high-quality fused images, they require large volumes of images across various modalities. This is not viable in situations where time efficiency is critical or the available equipment is inadequate. This paper presents a modified end-to-end Generative Adversarial Network (GAN), termed the Loss Minimized Fusion Generative Adversarial Network (LMF-GAN): a triple-ConvNet deep learning architecture for fusing medical images at a limited sampling rate. In contrast to conventional convolutional networks, the encoding network of the GAN combines a convolutional layer with a dense block. The loss is minimized by training the GAN's discriminator on all the source images, so that the generator learns more parameters and produces richer features in the fused image. Through adversarial training of the generator and discriminator, LMF-GAN produces fused images with clear textures. Compared with current fusion methods, the proposed method achieves state-of-the-art quality in both objective and subjective evaluations. The model was evaluated on standard data sets.
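
To make the described training scheme concrete, below is a minimal PyTorch sketch of the idea: a generator whose encoder pairs a plain convolutional layer with a dense block, and a discriminator trained on every source modality rather than a single one. All module names, layer sizes, and the loss weighting here are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only; layer sizes and loss terms are assumed, not taken
# from the paper.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each conv layer sees the concatenation of all earlier outputs."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class Generator(nn.Module):
    """Encoder = plain conv layer + dense block; decoder emits one fused image."""
    def __init__(self, in_ch=2):  # two source modalities stacked channel-wise
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.dense = DenseBlock(16)
        self.decode = nn.Sequential(
            nn.Conv2d(self.dense.out_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, sources):
        return self.decode(self.dense(self.head(sources)))

class Discriminator(nn.Module):
    """Scores single-channel images; trained on all source modalities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)

# One adversarial step on a pair of modalities (dummy tensors stand in for data).
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

mri = torch.rand(4, 1, 64, 64)
pet = torch.rand(4, 1, 64, 64)
fused = G(torch.cat([mri, pet], dim=1))

# Discriminator sees ALL source images as "real" and the fused image as "fake".
d_loss = (bce(D(mri), torch.ones(4, 1)) + bce(D(pet), torch.ones(4, 1))
          + bce(D(fused.detach()), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator tries to make the fused image indistinguishable from the sources,
# plus an assumed content term keeping it close to both modalities.
g_loss = bce(D(fused), torch.ones(4, 1)) \
         + nn.functional.l1_loss(fused, 0.5 * (mri + pet))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Feeding every source modality to the discriminator as "real", as sketched above, is what pushes the generator to preserve features from all inputs rather than collapsing onto a single modality.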
