Abstract

A common practice in acquiring magnetic resonance (MR) images for radiotherapy is to acquire longitudinally coarse slices while keeping the in-plane spatial resolution high, in order to shorten the scan time and ensure sufficient body coverage. The purpose of this work is to develop a deep learning-based method for synthesizing longitudinally high-resolution (HR) MR images using parallel-trained cycle-consistent generative adversarial networks (CycleGANs) with self-supervision. The parallel CycleGANs independently predict HR MR images in the two planes orthogonal to the longitudinal MR scan direction, and these predictions are fused to generate the final synthetic HR MR images. MR images from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were processed to investigate the performance of the proposed workflow, both qualitatively, by visual inspection of image appearance, and quantitatively, by calculation of the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). Preliminary results show that the proposed method can generate HR MR images visually indistinguishable from the ground-truth MR images. Quantitative evaluations show that all computed metrics of the synthetic HR MR images improve for the T1, T1CE, T2, and FLAIR modalities. These results demonstrate the feasibility of synthesizing HR MR images with self-supervised parallel CycleGANs and suggest the method's potential for common clinical radiotherapy practice.
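To make the workflow concrete, the sketch below mimics the inference stage described above: two slice-wise 2D generators operate along the two axes orthogonal to the coarse longitudinal scan direction, their outputs are fused into one volume, and NMAE, PSNR, and SSIM are computed against the ground truth. This is a minimal Python/NumPy sketch under several assumptions not stated in the abstract: the coarse volume is presumed to have been interpolated onto the HR grid beforehand so all shapes match, voxel-wise averaging stands in for the unspecified fusion step, identity lambdas replace the trained CycleGAN generators, and the NMAE normalization by the ground-truth intensity range is illustrative.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def nmae(pred, gt):
    """Normalized mean absolute error; normalizing by the ground-truth
    intensity range is an assumption, as the paper does not specify it."""
    return float(np.mean(np.abs(pred - gt)) / (gt.max() - gt.min()))


def synthesize_hr_volume(lr_volume, gen_plane_a, gen_plane_b):
    """Apply two slice-wise 2D generators along the two in-plane axes
    orthogonal to the coarse longitudinal scan axis (assumed to be axis 2
    here), then fuse the two predicted volumes. Voxel-wise averaging is
    used as the fusion rule, which is an assumption: the abstract states
    only that the two predictions are fused."""
    # Generator A sees slices spanning (axis 1, axis 2), one per index on axis 0.
    vol_a = np.stack([gen_plane_a(lr_volume[i, :, :])
                      for i in range(lr_volume.shape[0])], axis=0)
    # Generator B sees slices spanning (axis 0, axis 2), one per index on axis 1.
    vol_b = np.stack([gen_plane_b(lr_volume[:, j, :])
                      for j in range(lr_volume.shape[1])], axis=1)
    return 0.5 * (vol_a + vol_b)


# Hypothetical usage with identity stand-ins for the trained CycleGAN
# generators, and synthetic data in place of BraTS2020 volumes.
rng = np.random.default_rng(0)
gt_hr = rng.random((64, 64, 64)).astype(np.float32)  # ground-truth HR volume
lr_interp = gt_hr + 0.05 * rng.standard_normal(gt_hr.shape).astype(np.float32)
pred = synthesize_hr_volume(lr_interp, lambda s: s, lambda s: s)

drange = float(gt_hr.max() - gt_hr.min())
print("NMAE:", nmae(pred, gt_hr))
print("PSNR:", peak_signal_noise_ratio(gt_hr, pred, data_range=drange))
print("SSIM:", structural_similarity(gt_hr, pred, data_range=drange))
```

In the actual workflow, the two lambdas would be replaced by the trained CycleGAN generators for the two orthogonal planes, and the metrics would be computed per modality (T1, T1CE, T2, FLAIR) over the BraTS2020 volumes.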
