Abstract

Magnetic resonance (MR) imaging is an important computer-aided diagnosis technique that provides rich pathological information, but physical and physiological constraints seriously limit its applicability. In contrast, computed tomography (CT)-based radiotherapy is more widely used because of its rapid imaging and simple environmental requirements. It is therefore of great theoretical and practical significance to design a method that can construct an MR image from the corresponding CT image. In this paper, we treat MR image synthesis as a machine vision problem and propose a multiconditional generative adversarial network (GAN) for generating MR images from CT scan data. Exploiting the reversibility of GANs, we design a generator and a reverse generator for MR and CT imaging respectively; the two constrain each other and improve the consistency between the features of CT and MR images. In addition, we use a VGG16 model to extract semantic features, and we fuse a perceptual error and a voxel error with the original GAN loss to enhance the structural similarity and detailed texture of the synthesized MR images. Experimental results on a challenging public CT-MR imaging dataset show a distinct performance improvement over other GANs used in medical imaging and demonstrate the effectiveness of our method for medical image modality transformation.
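The abstract describes a generator loss fusing an adversarial term with a perceptual (VGG16 feature) error, a voxel-wise error, and a cycle constraint from the reverse generator. A minimal sketch of how such terms might be combined is given below; the loss weights, the L1/L2 choices, and the function names are illustrative assumptions, and a real implementation would compute the perceptual term on pretrained VGG16 feature maps rather than raw arrays.

```python
import numpy as np

def voxel_loss(fake_mr, real_mr):
    # Voxel-wise L1 error between synthesized and ground-truth MR volumes.
    return float(np.mean(np.abs(fake_mr - real_mr)))

def perceptual_loss(fake_feats, real_feats):
    # L2 distance between semantic feature maps (VGG16 features in the paper).
    return float(np.mean((fake_feats - real_feats) ** 2))

def cycle_loss(ct, ct_reconstructed):
    # Cycle consistency: the reverse generator maps the synthesized MR
    # back to CT, constraining the forward generator.
    return float(np.mean(np.abs(ct - ct_reconstructed)))

def total_generator_loss(adv, perc, vox, cyc,
                         w_perc=1.0, w_vox=10.0, w_cyc=10.0):
    # Fuse the adversarial, perceptual, voxel, and cycle terms.
    # The weights here are placeholders, not the paper's values.
    return adv + w_perc * perc + w_vox * vox + w_cyc * cyc

# Toy example with random volumes standing in for CT/MR data.
rng = np.random.default_rng(0)
real = rng.random((4, 4, 4))
fake = rng.random((4, 4, 4))
loss = total_generator_loss(adv=0.5,
                            perc=perceptual_loss(fake, real),
                            vox=voxel_loss(fake, real),
                            cyc=cycle_loss(real, fake))
```

In practice each term would be a differentiable tensor operation inside the training loop; the sketch only shows how the fused objective is assembled.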
