Abstract

Restoring high-quality CT images from their low-dose counterparts is an ill-posed, nonlinear problem for which deep learning approaches have shown promising results compared with classical model-based methods. Feedforward neural networks, whose output at any given time depends only on the input presented at that instant, have been widely used to produce CT images. In this article, a framework is presented in which a recurrent neural network (RNN) removes the streaking artefacts that arise in few-view CT imaging. In our approach, the spatial information of the sparse-view CT image is mapped into a temporal format before being fed to the RNN: the image is subdivided into small patches, which the network then processes one patch per time step. The results indicate that the RNN achieves restoration performance similar to that of the feedforward network in low-noise cases, while at high noise levels the RNN yields better results in terms of mean-squared error. A comparison of computational costs shows that the RNN requires less computation to process each individual patch, whereas the feedforward network is more efficient overall.
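
The spatial-to-temporal mapping described above can be illustrated with a minimal sketch: the image is unfolded into a sequence of non-overlapping patches, each patch is treated as one time step of a recurrent network, and the restored patches are folded back into an image. This is only an illustrative reconstruction of the idea, not the authors' code; PyTorch, the GRU cell, the patch size, and all module names here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchSequenceRNN(nn.Module):
    """Treats non-overlapping image patches as the time steps of an RNN.

    Hypothetical sketch: patch size, hidden size, and the GRU choice are
    illustrative assumptions, not the configuration used in the paper.
    """
    def __init__(self, patch=16, hidden=256):
        super().__init__()
        self.patch = patch
        self.rnn = nn.GRU(input_size=patch * patch, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, patch * patch)

    def forward(self, x):                    # x: (N, 1, H, W) sparse-view image
        n, _, h, w = x.shape
        p = self.patch
        # Spatial -> "temporal": unfold into a sequence of flattened patches.
        seq = F.unfold(x, kernel_size=p, stride=p)   # (N, p*p, L)
        seq = seq.transpose(1, 2)                    # (N, L, p*p): L time steps
        out, _ = self.rnn(seq)                       # one small patch per step
        out = self.head(out).transpose(1, 2)         # (N, p*p, L)
        # Reassemble the restored patches into a full image.
        return F.fold(out, output_size=(h, w), kernel_size=p, stride=p)

model = PatchSequenceRNN()
restored = model(torch.randn(1, 1, 512, 512))        # -> (1, 1, 512, 512)
```

Because the recurrent cell sees only one small patch per step, its per-step cost is low, consistent with the abstract's observation that the RNN needs less computation per patch even though the feedforward network is more efficient in total.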
