Abstract

Traditionally, a tomographic image is obtained by mathematical inversion of the encoding function of the imaging wave for a given set of measurements acquired from different angular positions. While continuous effort has been made over the years to image with fewer angular measurements, tomographic imaging with ultra-sparse sampling has yet to be realized. In this work, we push the sparsity to the limit of a single projection and demonstrate that the seemingly unlikely scenario of single-view tomographic imaging is readily achievable by leveraging state-of-the-art deep learning techniques and seamlessly integrating prior knowledge into the deep learning-based image reconstruction process. We develop a hierarchical neural network for X-ray CT imaging with ultra-sparse views, together with a structured training process that bridges the dimensionality gap between 2D projections and 3D volumes in X-ray imaging. The essence of our approach is the introduction of a novel feature-domain transformation between a 2D projection and a 3D volumetric CT image, combined with the construction of a robust encoding/decoding framework. To evaluate the feasibility of the proposed approach, the trained network is deployed on our generated dataset. We collect 3D CT data from a clinical patient database and digitally produce projection images from the CT images using a geometry consistent with a clinical on-board cone-beam CT system for radiation therapy. Data augmentation with a series of translations, rotations, and organ deformations applied to the 3D CT volumes is used to produce annotated data pairs that mimic different imaging situations. After adequate training, we run deep learning model inference for single-view 3D volumetric reconstruction on two cases. The reconstructed results are compared with the ground truth both qualitatively and quantitatively. We observe that the deep learning-derived images closely resemble the target images, indicating the potential of the proposed model for volumetric imaging. Quantitatively, the MAE/RMSE/SSIM/PSNR values averaged over all testing samples for single-view reconstruction are 0.018, 0.177, 0.929, and 30.523, respectively, for case 1, and 0.025, 0.385, 0.838, and 27.157, respectively, for case 2. The qualitative and quantitative results demonstrate that our model is capable of achieving high-quality 3D image reconstruction even from only a single 2D projection or a few projections. We have presented a novel deep learning framework for volumetric imaging with ultra-sparse data sampling. The mechanism is fundamentally different from that of traditional approaches in that image formation is driven by prior knowledge cast in the deep learning model. This work pushes the boundary of tomographic imaging to the single-view limit. Practically, single-view imaging may offer a useful solution for some image-guided interventional procedures with a simplified imaging system design.
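
The abstract does not detail the network architecture, so the following is only a minimal PyTorch sketch of the described pipeline: a 2D encoder for the projection, a feature-domain reshaping from 2D feature maps to a 3D feature volume, and a 3D decoder that generates the CT volume. All module names, layer sizes, and tensor shapes below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a single-view 2D-projection -> 3D-volume network.
# The encoder/transformation/decoder split mirrors the abstract's description;
# every shape and layer choice here is an assumption for illustration only.
import torch
import torch.nn as nn

class SingleViewRecon(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoding: compress the 2D projection into deep feature maps.
        self.encoder2d = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoding: expand the 3D feature volume into a CT volume.
        self.decoder3d = nn.Sequential(
            nn.ConvTranspose3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, proj):           # proj: (B, 1, 128, 128)
        feat2d = self.encoder2d(proj)  # (B, 256, 32, 32)
        b, c, h, w = feat2d.shape
        # Feature-domain transformation: reinterpret 256 2D channels as
        # 16 channels of a 16x32x32 feature volume.
        feat3d = feat2d.view(b, 16, c // 16, h, w)
        return self.decoder3d(feat3d)  # (B, 1, 64, 128, 128)

vol = SingleViewRecon()(torch.randn(2, 1, 128, 128))
print(vol.shape)  # torch.Size([2, 1, 64, 128, 128])
```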
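
The augmentation step (translations, rotations, and organ deformations of the 3D CT) could be approximated as below; the paper's exact parameter ranges and deformation model are not given here, so the ranges and the smooth random displacement field are assumptions.

```python
# Illustrative 3D CT augmentation: translation, rotation, and a smooth random
# displacement field as a crude stand-in for organ deformation. Parameter
# ranges are assumed values, not those used in the paper.
import numpy as np
from scipy.ndimage import shift, rotate, gaussian_filter, map_coordinates

def augment(volume, rng):
    v = shift(volume, rng.uniform(-5, 5, size=3), order=1)   # translation, voxels
    v = rotate(v, rng.uniform(-10, 10), axes=(1, 2),         # rotation, degrees
               reshape=False, order=1)
    # Smooth random displacement field for elastic-style deformation.
    disp = [gaussian_filter(rng.normal(0, 2, volume.shape), sigma=8)
            for _ in range(3)]
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    return map_coordinates(v, coords, order=1)
```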
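
The reported metrics (MAE, RMSE, SSIM, PSNR) can be reproduced with standard definitions; the exact intensity normalization and data range used in the paper are not stated, so the conventions below (scikit-image implementations, a data range of 1.0) are assumptions.

```python
# Sketch of the four reported evaluation metrics for a reconstructed volume
# against its ground truth. Normalization conventions are assumed.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(recon: np.ndarray, truth: np.ndarray, data_range: float = 1.0):
    mae = np.mean(np.abs(recon - truth))                 # mean absolute error
    rmse = np.sqrt(np.mean((recon - truth) ** 2))        # root-mean-square error
    ssim = structural_similarity(truth, recon, data_range=data_range)
    psnr = peak_signal_noise_ratio(truth, recon, data_range=data_range)
    return mae, rmse, ssim, psnr
```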
