Abstract

We propose a learning-based method to synthesize high-resolution (HR) CT images from low-resolution (LR) CT images, using a self-super-resolution framework built on a cycle-consistent generative adversarial network (CycleGAN). Super-resolution is an ill-posed problem, and recent methods rely on external training atlases to learn the transformation from LR to HR images; such atlases are often unavailable for CT imaging with high resolution along the slice-thickness direction. To circumvent the lack of HR training data along the z-axis, the network learns the mapping from LR 2D transverse-plane slices to HR 2D transverse-plane slices via CycleGAN, and then infers HR 2D sagittal- and coronal-plane slices by feeding the LR sagittal and coronal slices into the trained CycleGAN. The 3D HR CT image is then reconstructed by collecting these HR 2D sagittal and coronal slices and fusing them. In addition, to constrain the ill-posed LR-to-HR mapping to be close to a one-to-one mapping, the mapping is modeled with CycleGAN. To force the network to focus on learning the difference between LR and HR images, a residual network is integrated into the CycleGAN. To evaluate the proposed method, we retrospectively investigated 20 brain datasets. For each dataset, the original CT image volume served as the ground truth and training target. Low-resolution CT volumes were simulated by downsampling the original CT images along the slice-thickness direction. The mean absolute error (MAE) of our results is 17.9±2.9 HU and 25.4±3.7 HU at downsampling factors of 3 and 5, respectively. The proposed method has great potential for improving image resolution for low-pitch scans without hardware modification.
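The self-super-resolution inference loop described above can be summarized in code. Below is a minimal sketch, assuming a volume indexed as (z, y, x), a trained 2D CycleGAN generator exposed as a hypothetical callable `sr_model_2d`, and simple averaging as the fusion step (the abstract does not specify the fusion rule); it illustrates the workflow, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_lr(volume_hr: np.ndarray, factor: int) -> np.ndarray:
    """Simulate a thick-slice LR volume by downsampling along z (axis 0),
    mirroring how the LR training data were generated in the study."""
    return zoom(volume_hr, (1.0 / factor, 1.0, 1.0), order=1)

def super_resolve(volume_lr: np.ndarray, factor: int, sr_model_2d) -> np.ndarray:
    """Upsample along z, refine sagittal and coronal slices with a 2D model
    trained on transverse slices, then fuse the two refined volumes."""
    # Linear upsampling restores the HR grid before 2D refinement
    # (assumes z divides evenly by `factor`).
    vol = zoom(volume_lr, (factor, 1.0, 1.0), order=1)
    # Sagittal slices are z-y planes, indexed along x (axis 2).
    sag = np.stack([sr_model_2d(vol[:, :, i]) for i in range(vol.shape[2])], axis=2)
    # Coronal slices are z-x planes, indexed along y (axis 1).
    cor = np.stack([sr_model_2d(vol[:, i, :]) for i in range(vol.shape[1])], axis=1)
    return 0.5 * (sag + cor)  # averaging fusion (an assumption)

def mae_hu(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute error in Hounsfield units, the reported metric."""
    return float(np.mean(np.abs(pred - target)))
```

The residual integration mentioned in the abstract can likewise be sketched: the generator predicts only the HR-LR difference and adds it back to its input, so the network spends its capacity on the missing high-frequency detail. The block below is a generic global-residual generator in PyTorch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Generator with a global residual connection: the network outputs
    x + f(x), so it learns only the LR-to-HR difference."""
    def __init__(self, channels: int = 1, features: int = 64, n_layers: int = 6):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(n_layers):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # global residual connection
```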
