Abstract

Deep learning techniques have led to state-of-the-art image super-resolution on natural images. Normally, pairs of high-resolution and low-resolution images are used to train the deep learning models. These techniques have also been applied to medical image super-resolution. However, the characteristics of medical images differ from those of natural images in several ways. First, it is difficult to obtain high-resolution images for training in real clinical applications because of the limitations of imaging systems and clinical requirements. Second, high-resolution images of other modalities are often available (e.g., high-resolution T1-weighted images can be used to enhance low-resolution T2-weighted images). In this paper, we propose an unsupervised image super-resolution technique based on simple prior knowledge of the human anatomy. This technique does not require target high-resolution T2-weighted images (T2WI) for training. Furthermore, we present a guided residual dense network, which combines a residual dense network with a guided deep convolutional neural network to enhance low-resolution images by referring to high-resolution images of a different modality from the same subject. Experiments on a publicly available brain MRI database showed that our proposed method achieves better performance than state-of-the-art methods.
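To make the guided design concrete, the following is a minimal sketch (in PyTorch, an assumed framework) of a guidance-style residual dense network: the low-resolution T2WI is upsampled, concatenated with a same-subject high-resolution T1WI guide, and refined by residual dense blocks. Layer counts, channel widths, the bicubic upsampling, and concatenation-based fusion are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDenseBlock(nn.Module):
    """Dense convolutions with a local residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )
        # 1x1 conv fuses all densely connected features back to the input width.
        self.fuse = nn.Conv2d(channels + layers * growth, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(F.relu(conv(torch.cat(features, dim=1))))
        return x + self.fuse(torch.cat(features, dim=1))  # local residual


class GuidedRDN(nn.Module):
    """Super-resolves a low-resolution image using a high-resolution guide.

    The guide (e.g., a T1WI slice of the same subject) is concatenated with the
    bicubically upsampled low-resolution input before the residual dense blocks.
    """

    def __init__(self, scale: int = 2, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.scale = scale
        self.head = nn.Conv2d(2, channels, kernel_size=3, padding=1)  # LR + guide
        self.blocks = nn.Sequential(
            *[ResidualDenseBlock(channels) for _ in range(num_blocks)]
        )
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, lr: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(lr, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        out = self.head(torch.cat([up, guide], dim=1))
        out = self.blocks(out)
        return up + self.tail(out)  # global residual: predict the missing detail


if __name__ == "__main__":
    model = GuidedRDN(scale=2)
    lr_t2 = torch.randn(1, 1, 64, 64)    # low-resolution T2WI slice
    hr_t1 = torch.randn(1, 1, 128, 128)  # high-resolution T1WI guide
    print(model(lr_t2, hr_t1).shape)     # torch.Size([1, 1, 128, 128])
```

In an unsupervised setting of the kind the abstract describes, such a model would be trained without high-resolution T2WI targets, relying instead on anatomical prior knowledge; the loss formulation is not shown here.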
