Abstract

The Feldkamp, Davis and Kress (FDK) algorithm is a computationally efficient reconstruction method for three-dimensional (3D) cone-beam computed tomography (CBCT). However, it suffers from severe artefacts when the number of projections is insufficient. Although recent deep-learning-based methods have succeeded in reconstructing volumes from such sparse-view projections, reconstructing a large 3D volume efficiently remains challenging because of heavy memory consumption and the difficulty of obtaining sufficient training data. We therefore propose a deep learning method that overcomes these drawbacks. Our method consecutively reconstructs short bars in the 3D CT volume using the intensities of the detector pixels that lie on the projected trajectories of these bars. These pixel intensities are fed into a neural network trained to simulate the filtering and back-projection steps of the FDK algorithm. Because the reconstruction volume is divided into bars, the network requires only a small amount of memory. Furthermore, the network can be trained with only a few training samples, because plenty of bar data can be extracted even from a single CT image. We experimentally demonstrate that our approach works efficiently on both simulated and real sparse-view CBCT data, using training data extracted from only a single CT image.
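
To make the bar-wise idea concrete, the following is a minimal, hypothetical Python sketch (NumPy and PyTorch) of how the detector pixels along a bar's projected trajectory might be gathered and mapped to the bar's voxel values by a small network. The circular-scan geometry helper, the network shape, and the names project_bar and BarNet are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import torch
    import torch.nn as nn

    def project_bar(x, y, z_top, z_bot, angles, sod, sdd):
        """Project the endpoints of a vertical bar at (x, y) onto a flat
        detector for each view angle of an assumed circular cone-beam scan.
        sod/sdd: source-to-axis and source-to-detector distances."""
        us, v_top, v_bot = [], [], []
        for a in angles:
            # Rotate the bar position into the source-detector frame.
            xr = x * np.cos(a) + y * np.sin(a)
            yr = -x * np.sin(a) + y * np.cos(a)
            mag = sdd / (sod + yr)      # cone-beam magnification at depth yr
            us.append(xr * mag)
            v_top.append(z_top * mag)
            v_bot.append(z_bot * mag)
        return np.array(us), np.array(v_top), np.array(v_bot)

    class BarNet(nn.Module):
        """Small MLP mapping the intensities gathered along a bar's projected
        trajectory to the bar's voxel values, standing in for the FDK
        filtering and back-projection steps."""
        def __init__(self, n_inputs, n_voxels, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_inputs, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_voxels),
            )

        def forward(self, x):
            return self.net(x)

    # Usage sketch: 60 sparse views, a bar of 8 voxels sampled at 8 detector
    # rows per view -> 60 * 8 input intensities, 8 output voxel values.
    model = BarNet(n_inputs=60 * 8, n_voxels=8)
    traj_pixels = torch.randn(32, 60 * 8)   # a batch of gathered trajectories
    bars = model(traj_pixels)               # predicted bar voxel values

Because each bar is reconstructed independently, memory scales with the bar length rather than the whole volume, which matches the memory argument in the abstract.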
