High-resolution medical images are valuable for clinical diagnosis, but their acquisition is difficult and often limited by the capabilities of medical instruments. Super-resolution (SR) methods offer a solution by reconstructing high-resolution (HR) images from low-resolution (LR) ones. Most existing deep neural networks for 3D medical image SR are trained in a non-blind setting, where the LR images are degraded directly from HR data by a predetermined downscaling method. Such approaches rely heavily on the assumed degradation model and therefore deviate inevitably from real clinical practice. Blind super-resolution, a more attractive research line for this field, aims to generate HR images from LR inputs with unknown degradation. To generalize SR models across diverse types of degradation, we propose a robust blind SR framework for 3D medical images that works in an unsupervised manner through domain correction and an upscaling stage. First, a CycleGAN-based architecture translates LR data from the source domain to the target domain for domain correction. Then, an upscaling network is trained on predetermined HR-LR pairs for reconstruction. The proposed framework automatically learns correction kernels for noise and blur in unpaired 3D SR of magnetic resonance images (MRI). Our method achieves more accurate and robust reconstruction of HR images from LR MRI under multiple unknown degradation processes, and outperforms state-of-the-art supervised models and cycle-consistency-based methods, especially under severe distortion.
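The two-stage design described above (unpaired domain correction followed by supervised upscaling) can be sketched structurally as follows. This is a minimal illustration of the data flow only: the function names are hypothetical, and the trivial NumPy placeholders stand in for the CycleGAN generator and the learned SR network, which are not specified here.

```python
import numpy as np

def domain_correct(lr_real):
    """Stage 1 (placeholder): a CycleGAN generator would map a real LR
    volume with unknown degradation into the clean LR domain that the
    upscaler was trained on. Stubbed as identity for illustration."""
    return lr_real

def upscale(lr_clean, scale=2):
    """Stage 2 (placeholder): the learned upscaling network is stubbed
    as nearest-neighbor upsampling along all three spatial axes."""
    return (lr_clean
            .repeat(scale, axis=0)
            .repeat(scale, axis=1)
            .repeat(scale, axis=2))

def blind_sr(lr_real, scale=2):
    """Full inference path: correct the domain first, then upscale."""
    return upscale(domain_correct(lr_real), scale)

# Toy 3D "MRI" volume of shape (8, 8, 8), upscaled by 2x per axis.
lr = np.random.rand(8, 8, 8).astype(np.float32)
hr = blind_sr(lr, scale=2)
print(hr.shape)  # (16, 16, 16)
```

The key design point is the decoupling: the upscaler never needs to model the unknown real-world degradation, because stage 1 normalizes every input into the LR domain the upscaler expects.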