Abstract

In magnetic resonance imaging (MRI), several images can be acquired under different imaging settings (e.g., T1, T2, DWI, and FLAIR). These images share similar anatomical structures but have different contrasts, which together provide a wealth of information for diagnosis. However, images under some settings may be unavailable due to limited scanning time or corruption caused by noise, so it is attractive to derive the missing images from the available MR images. In this paper, we propose a novel end-to-end multi-setting MR image synthesis method based on generative adversarial networks (GANs), a deep learning model. In the proposed method, MR images obtained under different settings are used as inputs to a GAN, and each image is encoded by its own encoder. Each encoder includes a refinement structure that extracts multiscale feature maps from its input image. The multiscale feature maps from the different input images are then fused to generate the desired target images under the specified settings. Because images synthesized by GANs tend to have blurred edges, we incorporate gradient prior information into the model to preserve high-frequency information such as the important tissue textures of medical images. In the proposed model, multiscale information is also used in the adversarial learning itself (not only in the generator or discriminator), which further improves the quality of the synthesized images. We evaluated the proposed method on two public datasets, BRATS and ISLES. Experimental results demonstrate that the proposed approach is superior to current state-of-the-art methods.
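
The abstract describes a multi-encoder generator with multiscale feature fusion and a gradient prior that penalizes blurred edges. The full method is not given here, so the following is only a minimal sketch of those two ideas, assuming a PyTorch-style implementation; all module and parameter names (Encoder, FusionGenerator, gradient_loss, base channel widths) are illustrative assumptions, not the authors' code, and the adversarial and multiscale discriminator terms are omitted.

```python
# Hypothetical sketch: two encoders (one per input contrast) produce features at
# two scales, which are fused and decoded into a target contrast; a finite-
# difference gradient loss approximates the edge-preserving gradient prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Encodes one input contrast into feature maps at two scales."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        f1 = self.conv1(x)   # full-resolution features
        f2 = self.conv2(f1)  # half-resolution features
        return f1, f2

class FusionGenerator(nn.Module):
    """Fuses multiscale features from two encoders and decodes one target contrast."""
    def __init__(self, base=32):
        super().__init__()
        self.enc_a = Encoder(base=base)
        self.enc_b = Encoder(base=base)
        self.fuse = nn.Conv2d(base * 4, base * 2, 1)      # fuse half-resolution features
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.out = nn.Conv2d(base * 3, 1, 3, padding=1)   # fuse with full-resolution features

    def forward(self, a, b):
        a1, a2 = self.enc_a(a)
        b1, b2 = self.enc_b(b)
        mid = self.fuse(torch.cat([a2, b2], dim=1))
        up = self.up(mid)
        return torch.tanh(self.out(torch.cat([up, a1, b1], dim=1)))

def gradient_loss(pred, target):
    """L1 difference of horizontal/vertical image gradients (edge-preserving prior)."""
    dx_p, dy_p = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dx_t, dy_t = target[..., :, 1:] - target[..., :, :-1], target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

# Usage: synthesize a missing contrast (e.g. T2) from two available ones (e.g. T1, FLAIR).
g = FusionGenerator()
t1, flair, t2 = (torch.randn(2, 1, 64, 64) for _ in range(3))
fake_t2 = g(t1, flair)
loss = F.l1_loss(fake_t2, t2) + 0.5 * gradient_loss(fake_t2, t2)  # full model adds adversarial terms
loss.backward()
```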
