Abstract

While MR-only treatment planning using synthetic CTs (synCTs) offers potential for streamlining clinical workflow, efficient and automated synCT generation in the brain is needed to facilitate near real-time MR-only planning. This work describes a novel method for generating brain synCTs based on generative adversarial networks (GANs), a deep learning model that trains two competing networks simultaneously, and compares it to a deep convolutional neural network (CNN). Post-gadolinium T1-weighted MR and CT simulation (CT-SIM) images from fifteen brain cancer patients were retrospectively analyzed. The GAN model generated synCTs from T1-weighted MR images using a residual network (ResNet) as the generator. The discriminator was a CNN with five convolutional layers that classified each input image as real or synthetic. Fivefold cross-validation was performed to validate the model. GAN performance was compared to that of the CNN using mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics between the synCT and CT images. GAN training took approximately 11 h, with a testing time for a new case of 5.7 ± 0.6 s. For the GAN, MAEs between synCT and CT-SIM were 89.3 ± 10.3 Hounsfield units (HU) across the entire field of view (FOV) and 41.9 ± 8.6 HU within tissue. In bone and air, however, MAE averaged roughly 240–255 HU. By comparison, the CNN model had an average full-FOV MAE of 102.4 ± 11.1 HU. For the GAN, the mean PSNR was 26.6 ± 1.2 and the mean SSIM was 0.83 ± 0.03. GAN synCTs preserved fine detail better than CNN synCTs, and regions of abnormal anatomy were well represented on the GAN synCTs. We developed and validated a GAN model that uses a single T1-weighted MR image as input to generate robust, high-quality synCTs in seconds. Our method offers strong potential for supporting near real-time MR-only treatment planning in the brain.
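The abstract does not include code, but the architecture it describes maps onto a short implementation. Below is a minimal PyTorch sketch of a ResNet-style generator paired with a five-convolutional-layer discriminator, plus the MAE and PSNR metrics used for evaluation. The number of residual blocks, layer widths, kernel sizes, normalization choices, HU data range, and all names (ResidualBlock, Generator, Discriminator, mae_hu, psnr) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """One residual block: conv -> norm -> ReLU -> conv -> norm, plus identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class Generator(nn.Module):
    """ResNet-style generator: single-channel T1-weighted MR slice in, synCT slice out.
    The block count and width here are assumptions for illustration."""
    def __init__(self, n_blocks: int = 9, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(1, width, kernel_size=7, padding=3),
                  nn.ReLU(inplace=True)]
        layers += [ResidualBlock(width) for _ in range(n_blocks)]
        layers += [nn.Conv2d(width, 1, kernel_size=7, padding=3)]
        self.net = nn.Sequential(*layers)

    def forward(self, mr):
        return self.net(mr)


class Discriminator(nn.Module):
    """CNN with five convolutional layers, as the abstract describes: four strided
    downsampling convolutions followed by a fifth that outputs real/synthetic scores."""
    def __init__(self, width: int = 64):
        super().__init__()
        chans = [1, width, width * 2, width * 4, width * 8]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        blocks += [nn.Conv2d(chans[-1], 1, kernel_size=4, padding=1)]  # fifth conv layer
        self.net = nn.Sequential(*blocks)

    def forward(self, ct):
        return self.net(ct)


def mae_hu(synct: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
    """Mean absolute error in HU, assuming both tensors are already in HU units."""
    return (synct - ct).abs().mean()


def psnr(synct: torch.Tensor, ct: torch.Tensor, data_range: float = 4096.0) -> torch.Tensor:
    """Peak signal-to-noise ratio; data_range is an assumed HU dynamic range."""
    mse = ((synct - ct) ** 2).mean()
    return 10.0 * torch.log10(data_range ** 2 / mse)
```

In adversarial training of this kind, the generator is updated to make the discriminator score its synCTs as real, while the discriminator is simultaneously updated to separate real CT-SIM slices from generated ones. After training, only the generator is needed at inference, which is consistent with the seconds-scale per-case testing time reported above.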
