Abstract

High resolution (HR) magnetic resonance images (MRI) provide rich anatomical tissue information that enables accurate diagnosis and pathological analysis. However, the acquisition of HR MRI in clinical applications is limited by hardware restrictions, scanning time, and signal-to-noise ratio (SNR). Recently, deep learning has shown promise for improving the spatial resolution of MRI. In this study, we propose a multilevel and parallel Conv-Deconv super-resolution (CDSR) network to reconstruct high-quality HR MRI from low resolution (LR) inputs. Unlike current SR methods based on convolutional neural networks (CNNs), we connect parallel 3D convolution and deconvolution filters to capture context information and extract multilevel features. Hierarchical features are adaptively upsampled, each by its subsequent deconvolution layer, and then fused together to recover HR details. To alleviate the optimization difficulty, we add the interpolated input to the fused output, which acts as a cross-scale residual learning strategy and accelerates convergence. Extensive experimental results on three benchmark datasets show that our proposed method outperforms currently reported MRI SR methods and achieves state-of-the-art performance.
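The cross-scale residual idea described above can be illustrated with a minimal sketch: the network's fused output is added to an interpolated copy of the LR input, so the learned branch only has to model the missing high-frequency detail. The 1-D NumPy toy below is an illustrative assumption, not the paper's actual 3D Conv-Deconv network; `residual_branch` stands in for the multilevel fusion path.

```python
import numpy as np

def nearest_upsample(x, scale=2):
    # Stands in for the interpolated-input branch (e.g. trilinear in 3D MRI).
    return np.repeat(x, scale)

def residual_branch(x, scale=2):
    # Hypothetical placeholder for the parallel Conv-Deconv feature path:
    # it would predict a correction at the target resolution. Here it
    # returns zeros so the sketch stays self-contained.
    return np.zeros(len(x) * scale)

def cdsr_forward(lr, scale=2):
    # Cross-scale residual learning: fused network output plus the
    # interpolated input, so training targets only high-frequency detail.
    return nearest_upsample(lr, scale) + residual_branch(lr, scale)

lr = np.array([1.0, 2.0, 3.0])
hr = cdsr_forward(lr)
print(hr)  # with a zero residual, the interpolated input passes through
```

Because the residual branch starts near zero, the initial output is already close to the interpolated input, which is why this formulation eases optimization and speeds up convergence.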
