Abstract

Multi-modality magnetic resonance (MR) images provide complementary information for disease diagnosis. However, missing modalities are common in real-life clinical practice. Current methods usually employ a convolution-based generative adversarial network (GAN) or one of its variants to synthesize the missing modality. Motivated by the development of the vision transformer, we explore its application to the MRI modality synthesis task in this work. We propose a novel supervised deep learning method for synthesizing a missing modality, making use of a transformer-based encoder. Specifically, a model is trained to translate 2D MR images from T1-weighted to T2-weighted based on a conditional GAN (cGAN). We replace the encoder with a transformer and feed in adjacent slices to enrich spatial prior knowledge. Experimental results on a private dataset and a public dataset demonstrate that our proposed model outperforms state-of-the-art supervised methods for MR image synthesis, both quantitatively and qualitatively.

Clinical relevance: This work proposes a method for synthesizing T2-weighted images from T1-weighted ones to address the missing-modality issue in MRI.
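The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the architecture it describes: a cGAN generator whose encoder is a transformer operating on a stack of adjacent T1-weighted slices, with a convolutional decoder that produces the corresponding T2-weighted slice. Every name and hyperparameter here (TransformerEncoderGenerator, patch size 16, embedding dimension 512, three input slices, and so on) is an illustrative assumption, not the authors' code.

```python
# Hypothetical sketch, assuming a ViT-style patch-embedding encoder inside a
# pix2pix-style cGAN generator. Not the authors' implementation.
import torch
import torch.nn as nn

class TransformerEncoderGenerator(nn.Module):
    def __init__(self, in_slices=3, img_size=256, patch=16, dim=512, depth=6, heads=8):
        super().__init__()
        self.grid = img_size // patch                      # token grid side length
        n_patches = self.grid ** 2
        # Patch embedding: split the stacked adjacent T1 slices into
        # non-overlapping patches and project each to a token of size `dim`.
        self.embed = nn.Conv2d(in_slices, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Convolutional decoder: upsample the token grid (16x16 for these
        # settings) back to full resolution, total upsampling factor 4*2*2 = 16.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, 256, kernel_size=4, stride=4), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, kernel_size=2, stride=2), nn.Tanh(),
        )

    def forward(self, x):
        # x: (B, in_slices, H, W) -- adjacent T1 slices stacked as channels
        t = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        t = self.encoder(t)                                      # transformer encoder
        t = t.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        return self.decoder(t)                                   # (B, 1, H, W) T2 slice

gen = TransformerEncoderGenerator()
t1_slices = torch.randn(2, 3, 256, 256)   # a batch of 3 adjacent T1 slices each
fake_t2 = gen(t1_slices)                  # (2, 1, 256, 256) synthesized T2 slices
```

In a pix2pix-style cGAN setup this generator would be trained against a discriminator that scores (T1 input, T2 output) pairs, typically with an adversarial loss plus an L1 term against the real T2 slice; the paper's actual discriminator and loss weighting are not stated in the abstract.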
