Abstract

Magnetic Resonance Images (MRIs) of different modalities provide different reference values for pathological diagnosis, but multimodality MRIs are difficult to obtain. Medical image synthesis has therefore been proposed as an effective solution, in which missing modalities are synthesized from the existing ones. To train a multimodal MRI synthesizer with a limited number of unpaired MRIs, in this paper we propose a novel High-dimensional Knowledge Guided Generative Adversarial Network (HKG-GAN). In HKG-GAN, a cross-dimensional knowledge transfer network extracts features from 2D images (slices of MRIs) to measure the perceptual similarity between images of the source and synthesized modalities; its knowledge is transferred from a pre-trained 3D network without accessing that network's private training dataset. In addition, based on code-splitting and cross-decoding, HKG-GAN is a one-for-all network that encodes MRIs into content codes and style codes, and then cross-decodes the encoding with that of a random image of a different modality to convert an MRI to the target modality. The effectiveness of HKG-GAN has been demonstrated through comparative experiments.
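The code-splitting and cross-decoding idea can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the encoder, decoder, dimensions, and variable names are all hypothetical, and real versions would be deep convolutional networks trained adversarially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
IMG_DIM, CONTENT_DIM, STYLE_DIM = 64, 16, 4

# One shared ("one-for-all") linear encoder/decoder pair standing in for
# the trained networks; the encoder output is split into a
# modality-invariant content code and a modality-specific style code.
W_enc = rng.standard_normal((CONTENT_DIM + STYLE_DIM, IMG_DIM)) * 0.1
W_dec = rng.standard_normal((IMG_DIM, CONTENT_DIM + STYLE_DIM)) * 0.1

def encode(x):
    """Encode an image vector and split the code into (content, style)."""
    z = W_enc @ x
    return z[:CONTENT_DIM], z[CONTENT_DIM:]

def decode(content, style):
    """Decode a (content, style) pair back into image space."""
    return W_dec @ np.concatenate([content, style])

# Cross-decoding: keep the content code of a source-modality image, but
# decode it together with the style code of a random image from the
# target modality.
x_src = rng.standard_normal(IMG_DIM)   # e.g. a source-modality slice
x_tgt = rng.standard_normal(IMG_DIM)   # e.g. a random target-modality slice

content_src, _ = encode(x_src)
_, style_tgt = encode(x_tgt)
x_synth = decode(content_src, style_tgt)  # source content, target style
```

In this scheme the same encoder and decoder serve every modality, which is what makes the network "one-for-all": only the style code carried into the decoder determines the output modality.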
