Abstract
Super-resolution reconstruction recovers a high-resolution image from a low-resolution one and is particularly beneficial for medical images, whose quality is clinically significant in diagnosis, treatment, and research. However, super resolution is a challenging inverse problem because it is ill-posed. In this paper, inspired by recent developments in deep learning, a super-resolution algorithm for medical images, SR-DCNN, is proposed; it is based on a neural network and employs a deconvolution operation. The deconvolution establishes an end-to-end mapping between the low- and high-resolution images. First, a training set of 1500 medical images of the lung, brain, heart, and spine was collected, down-sampled, and fed into the network. Then, patch-based image features were extracted with a set of filters, and the parametric rectified linear unit (PReLU) was applied as the activation function. Finally, the extracted features were used to reconstruct high-resolution images by minimizing the loss between the predicted output image and the original high-resolution image. Various network structures and hyperparameter settings were explored to achieve a good trade-off between performance and computational efficiency, and a four-layer network achieved the best results in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), information entropy (IE), and execution speed. The network was then validated on test data, where the proposed SR-DCNN outperformed current state-of-the-art methods both quantitatively and qualitatively.
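Although the abstract does not give the exact architecture or hyperparameters, the pipeline it describes (convolutional feature extraction with PReLU activations, a deconvolution layer that upsamples and reconstructs the high-resolution image, and loss minimization against the original high-resolution target) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' verified implementation: the layer widths, kernel sizes, x2 scale factor, and the MSE loss are all assumptions.

```python
# Minimal sketch of a four-layer deconvolution-based SR network in the spirit
# of SR-DCNN. All layer widths, kernel sizes, and the x2 upscale factor are
# illustrative assumptions; the paper's abstract does not specify them.
import torch
import torch.nn as nn

class SRDCNNSketch(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            # Patch-based feature extraction from the low-resolution input.
            nn.Conv2d(1, 64, kernel_size=9, padding=4),
            nn.PReLU(),  # parametric rectified linear unit
            # Non-linear mapping of the extracted features.
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.PReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.PReLU(),
            # Deconvolution (transposed convolution) upsamples the features
            # and reconstructs the high-resolution image end to end.
            nn.ConvTranspose2d(32, 1, kernel_size=scale * 2, stride=scale,
                               padding=scale // 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0):
    """PSNR = 10 * log10(MAX^2 / MSE), assuming intensities in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

# Training minimizes the loss between the predicted and original HR images;
# mean squared error is assumed here, and it also ties directly to PSNR.
model = SRDCNNSketch(scale=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

lr_patch = torch.randn(8, 1, 32, 32)   # down-sampled low-resolution patches
hr_patch = torch.randn(8, 1, 64, 64)   # corresponding high-resolution targets

optimizer.zero_grad()
pred = model(lr_patch)
loss = loss_fn(pred, hr_patch)
loss.backward()
optimizer.step()
print(f"loss={loss.item():.4f}  psnr={psnr(pred, hr_patch).item():.2f} dB")
```

Placing the deconvolution last lets the three convolutional layers operate on the small low-resolution grid rather than on a pre-upsampled image, which is one common reason such designs are computationally efficient; whether SR-DCNN relies on exactly this property is not stated in the abstract.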